
Need Opinion about the Optimal Boot and Storage Configuration: Z77X-UP5-TH

So I think I am going to go with the following board: GA-Z77X-UP5-TH (http://www.gigabyte.us/products/product-page.aspx?pid=4279&dl=1#sp) and wanted to get your opinion about the optimal boot and storage configuration. My main question concerns the various SATA ports on motherboards. Can I RAID 6Gb/s and 3Gb/s ports together, and if so, do I lose performance? Should I use the 3Gb/s or the 6Gb/s ports for the OS, bearing in mind that I am going to be editing off my HDD RAID array (I want at least 6TB, which would mean a 3- or 4-disk array)? All drives are the ones that you recommend in your build.

The GA-Z77X-UP5-TH has:
2 x SATA 6Gb/s connectors (SATA3 0/SATA3 1) supporting up to 2 SATA 6Gb/s devices: RAID0 of Western Digital WD Black WD2002FAEX 2TB 7200 RPM 64MB cache drives
4 x SATA 3Gb/s connectors (SATA2 2~5) supporting up to 4 SATA 3Gb/s devices: Samsung 840 Pro Series 2.5": should I do two 256GB in RAID0 or one 512GB? I'm worried that if I software-RAID both my boot and storage (especially with a 4-HDD RAID0 array) I am going to lose computing power. As we spoke about before, they have these RocketRAID devices: http://hptmac.com/product.php?_index=94. But the issue here is that they are going to share bandwidth with the x16 slot that contains my graphics card, and I feel like I need all the bandwidth available for that since this is an editing station.

The board that you chose, GIGABYTE GA-X79-UD3 reads:
6 x SATA 6Gb/s

SATA RAID
X79 chipset:
2 x SATA 6Gb/s connectors (SATA3 0/1) and 4 x SATA 3Gb/s connectors (SATA2 2/3/4/5) support RAID 0, RAID 1, RAID 5, and RAID 10

I'm confused about how there are 6 x SATA 6Gb/s with only 2 SATA 6Gb/s connectors.

I want to pull the trigger today or tomorrow on a monster machine, but I really need to figure out these storage/boot options before I move forward, because if I am left with bottlenecks anywhere, the rest of the machine won't matter.

As always, I appreciate your advice.
 

I think I answered my own question. The SATA 6Gb/s ports are for the SSDs, as only SSDs can take advantage of them. The SATA 3Gb/s ports are where I will put the HDDs. So the plan now is to buy the same machine as your build, going with the Core i7-3930K CPU and the GTX 680 (4GB VRAM; I saw some posts that explain setting it up with OpenGL and CUDA, and it is my understanding that more and more editing suites are using video cards).

Media/working drives: I'm going with a 2- or 3-drive RAID0 of 3TB HDDs as my working drive (depending on benchmarks once set up)
OS and apps: 2 x Samsung 840 256GB SSDs in RAID0 (less expensive and more performance than one 840 Pro 512GB)
Backup solution: Drobo NAS

My concerns are the following:
1. Setting up the RAID arrays
2. The hit that the CPU is going to take with software RAID
3. Getting the GPU to work at its full capacity
4. Overclocking, since I have never done it before.
 

I think I answered my own question. The SATA 6Gb/s ports are for the SSDs, as only SSDs can take advantage of them. The SATA 3Gb/s ports are where I will put the HDDs.
Sensible decision. However, "SATA 3" usually refers to "SATA-III" which is 6 Gbps. "SATA 2" (or "SATA-II") runs at 3 Gbps. Be careful you don't confuse anyone (including yourself :)).
Also you can easily add more SATA-III ports in groups of 2 or 4 via PCIe cards (I have 2x 2-port ASM1061 cards in my Z77X-UP5 TH at the moment and may add a third: they're cheap) to give you more expansion in the future.

Media/working drives: I'm going with a 2- or 3-drive RAID0 of 3TB HDDs as my working drive (depending on benchmarks once set up)
OS and apps: 2 x Samsung 840 256GB SSDs in RAID0 (less expensive and more performance than one 840 Pro 512GB)
Backup solution: Drobo NAS
There are obviously lots of factors in a backup solution design. A system like this sounds like a "serious" system, so hopefully you'll be maintaining at least two backups of your data (on separate devices, and preferably in separate physical locations) and be confident in knowing where to find and how to restore all your important data. Speed of access will affect both backups and restores.
So have you considered something like a Drobo 5D (connected by Thunderbolt if you had it) for the first level of backups, and either another of the same or a 5N for your second level?

My own data is currently ~6.5 TB in volume, and my backups are to individual 3.5" drives in SATA docks. The weekly and monthly backups are just sets of drives which get slotted in, verified and updated (and verified again) and then removed and stored in carriers (easily transported offsite as a box of drives, and easily restored from as long as I have an OS X machine with a SATA dock, and the filesystem decryption keys: no special Drobos required for restoration). This is why I have 12 SATA ports in my machine at the moment (plus external FW800 and USB3 docks if I need them).
You don't necessarily have to do things the same way, but I strongly recommend you think the whole process through if your data and your time are valuable. Funny how it's those of us who have lost files at some stage who are the ones who get serious about backups. :) I established this system while on a FW800+USB2-only machine, and am revelling in the speed of my new workstation. It used to be that if I replaced a whole failed drive (e.g. one of the backups: every drive fails eventually) and kicked off a new backup that had to copy a LOT of new data, the process could take a weekend. Having so much of my data on direct SATA and eSATA connections now has made a massive difference to the "straightline" speed of each drive, and has also allowed the parallel backups to each drive to not be bottlenecked by common I/O buses.

2. The hit that the CPU is going to take with software RAID
RAID0 has minimal overhead. But be conscious that with a 6 or 9TB volume depending on 3 disks with no redundancy, when one of those disks throws an error (when, not if), you will have a significant interruption while you get a replacement disk, make a new RAID array, and copy all those files back from the most recent backup. Be prepared for it.
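
To put rough numbers on that risk, here's a back-of-the-envelope sketch (the 5% annual failure rate per drive is an assumed illustrative figure, not a measured one):

```python
# Back-of-the-envelope RAID0 risk: the array is lost if ANY member fails.
# The 5% annual failure rate per drive is an assumed figure for illustration.

def raid0_loss_probability(n_drives: int, drive_afr: float = 0.05) -> float:
    """Chance a RAID0 array loses data within a year, assuming
    independent drive failures at the given annual failure rate."""
    return 1.0 - (1.0 - drive_afr) ** n_drives

for n in (1, 2, 3, 4):
    print(f"{n}-drive RAID0: {raid0_loss_probability(n):.1%} chance of loss per year")
# 1-drive: 5.0%, 2-drive: 9.8%, 3-drive: 14.3%, 4-drive: 18.5%
```

The point stands whatever the exact rate: striping multiplies your exposure, so the backups have to carry the redundancy the array doesn't.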

Mind you, if you're after killer speed rather than massive volume, you'll probably only be using the first 1.5 TB or so of each of those drives, giving you 3 or 4.5 TB of volume. That's still quite large.

4. Overclocking, since I have never done it before.
My advice: get the machine set up and running, and then play with OC.
 

So I think I am going to go with the following board: GA-Z77X-UP5-TH (http://www.gigabyte.us/products/product-page.aspx?pid=4279&dl=1#sp) and wanted to get your opinion about the optimal boot and storage configuration. My main question concerns the various SATA ports on motherboards. Can I RAID 6Gb/s and 3Gb/s ports together, and if so, do I lose performance? Should I use the 3Gb/s or the 6Gb/s ports for the OS, bearing in mind that I am going to be editing off my HDD RAID array (I want at least 6TB, which would mean a 3- or 4-disk array)? All drives are the ones that you recommend in your build.

The GA-Z77X-UP5-TH has:
2 x SATA 6Gb/s connectors (SATA3 0/SATA3 1) supporting up to 2 SATA 6Gb/s devices: RAID0 of Western Digital WD Black WD2002FAEX 2TB 7200 RPM 64MB cache drives
4 x SATA 3Gb/s connectors (SATA2 2~5) supporting up to 4 SATA 3Gb/s devices: Samsung 840 Pro Series 2.5": should I do two 256GB in RAID0 or one 512GB? I'm worried that if I software-RAID both my boot and storage (especially with a 4-HDD RAID0 array) I am going to lose computing power. As we spoke about before, they have these RocketRAID devices: http://hptmac.com/product.php?_index=94. But the issue here is that they are going to share bandwidth with the x16 slot that contains my graphics card, and I feel like I need all the bandwidth available for that since this is an editing station.

The board that you chose, GIGABYTE GA-X79-UD3 reads:
6 x SATA 6Gb/s

SATA RAID
X79 chipset:
2 x SATA 6Gb/s connectors (SATA3 0/1) and 4 x SATA 3Gb/s connectors (SATA2 2/3/4/5) support RAID 0, RAID 1, RAID 5, and RAID 10

I'm confused about how there are 6 x SATA 6Gb/s with only 2 SATA 6Gb/s connectors.

I want to pull the trigger today or tomorrow on a monster machine, but I really need to figure out these storage/boot options before I move forward, because if I am left with bottlenecks anywhere, the rest of the machine won't matter.

As always, I appreciate your advice.

2 x 6G ports, 4 x 3G ports.

Having a soft RAID won't cost you very much in terms of CPU cycles. However, soft RAID isn't the best way to go if you want performance RAID. Also, I don't know that you can boot a hack from a soft RAID. RAID of SSDs becomes noticeable only with large files. However, RAIDing your SSDs will mean that you lose TRIM support, which will end up costing you drive performance. Also, doubling up two SATA-III 6Gb/s drives will not give you 12Gb/s, as the controller itself can only cope with 6Gb/s. If you want a performance SSD setup, use a PCIe solution like gxsolace has.

Do a search for your board on this forum and ask others. It's a Z77 board so I can see no issues with it at all. Do you really need Thunderbolt?

I use my rig for photo editing and I do this:
1 x 250GB Samsung SSD as OS + current project.
1 x 128GB Patriot Wildfire as backup for 'project' and Lightroom catalogue
2 x WD 2TB Black: 1 as an older archive, 1 as main storage - these disks are where the RAWs stay.
Built-in SATA dock on the case. Each night I back up the current project and lrcat to the little Patriot SSD. Takes 2-3 minutes depending on where I am at, bearing in mind I shoot massive all-day weddings with a 5D3 and always convert to 16-bit TIFF, work up in PS before batch output to PS.

The idea is to get the SSDs to do the day-to-day heavy lifting, with the WD Blacks acting as high-capacity, high-reliability, accessible archives.

If you're doing photo editing then the biggest bottleneck may well be Lightroom itself, unfortunately; extra drive speed past an SSD is unlikely to improve its performance much, as it's a really CPU-bound application. If you are video editing then I hope I have been of some help, but I honestly think that gxsolace's PCIe SSD setup is the way to go for performance I/O.


As for OC, Sandy Bridge is a bit more overclockable than Ivy Bridge, which runs quite hot.

DROBO = cool if you want all your data locked away in a proprietary format.
Ben
 

I think I answered my own question. The SATA 6Gb/s ports are for the SSDs, as only SSDs can take advantage of them. The SATA 3Gb/s ports are where I will put the HDDs. So the plan now is to buy the same machine as your build, going with the Core i7-3930K CPU and the GTX 680 (4GB VRAM; I saw some posts that explain setting it up with OpenGL and CUDA, and it is my understanding that more and more editing suites are using video cards).

Media/working drives: I'm going with a 2- or 3-drive RAID0 of 3TB HDDs as my working drive (depending on benchmarks once set up)
OS and apps: 2 x Samsung 840 256GB SSDs in RAID0 (less expensive and more performance than one 840 Pro 512GB)
Backup solution: Drobo NAS

My concerns are the following:
1. Setting up the RAID arrays
2. The hit that the CPU is going to take with software RAID
3. Getting the GPU to work at its full capacity
4. Overclocking, since I have never done it before.

Hi Dnk -

Since several people have answered your other questions, I'm going to address the one about the Samsung 840 vs. 840 Pro, which technically wasn't a question. However, keep in mind there are several key differences between the 840 and the 840 Pro:

1/ The 840 uses 3-bit-per-cell NAND (known as TLC NAND); the 840 Pro uses 2-bit-per-cell MLC
2/ The 840 has a 3-year warranty; the 840 Pro has a 5-year warranty
3/ TLC NAND is slower than MLC NAND, both in read/write time AND IOPS
4/ TLC NAND has far fewer P/E (Program/Erase) cycles than MLC NAND (roughly 1,000 vs. 3,000)
5/ TLC NAND has slower erase times than MLC NAND

There are reasons for this. TLC NAND is technically a lower-grade form of MLC NAND, and it's what's used in the 840, hence its shorter 3-year warranty. My advice to you is to go for two 256GB Samsung 840 Pros: you'll get far better reliability/longevity, greater peace of mind, faster performance, and a few extra GB of storage.
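
To make point 4 concrete, here's a rough endurance estimate (assumptions: the rated P/E cycles above, no write amplification, and a steady 20 GB/day of writes; real drives will do worse than this ideal):

```python
# Idealized SSD endurance from rated P/E cycles (illustrative assumptions:
# write amplification of 1, steady 20 GB/day -- real-world endurance is lower).

def total_writes_tb(capacity_gb: int, pe_cycles: int) -> float:
    """Total host writes (TB) before the rated P/E cycles are exhausted."""
    return capacity_gb * pe_cycles / 1000.0

for name, pe_cycles in (("840 (TLC)", 1000), ("840 Pro (MLC)", 3000)):
    tb = total_writes_tb(256, pe_cycles)
    years = tb * 1000 / 20 / 365
    print(f"256GB {name}: ~{tb:.0f} TB of writes, ~{years:.0f} years at 20 GB/day")
# 256GB 840 (TLC): ~256 TB, ~35 years; 256GB 840 Pro (MLC): ~768 TB, ~105 years
```

Even the TLC figure looks generous on paper; the practical argument for the Pro is the 3x headroom left once write amplification and heavy scratch/render writes eat into it.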

/ gx
 

2 x 6G ports, 4 x 3G ports.
I use my rig for photo editing and I do this:
1 x 250GB Samsung SSD as OS + current project.
1 x 128GB Patriot Wildfire as backup for 'project' and Lightroom catalogue
2 x WD 2TB Black: 1 as an older archive, 1 as main storage - these disks are where the RAWs stay.
Built-in SATA dock on the case. Each night I back up the current project and lrcat to the little Patriot SSD. Takes 2-3 minutes depending on where I am at, bearing in mind I shoot massive all-day weddings with a 5D3 and always convert to 16-bit TIFF, work up in PS before batch output to PS.

The idea is to get the SSDs to do the day-to-day heavy lifting, with the WD Blacks acting as high-capacity, high-reliability, accessible archives.
Ben

Very interesting setup, and great call on the dedicated SSD for the Lightroom catalogue. My main business is video production, but I spend considerable time in Lightroom processing photos. Do you find an increase in overall Lightroom speed? I too shoot the 5D Mark III, but have been shooting at lower RAW resolution because of the editing constraints; would love to get beyond that.

Sensible decision. However, "SATA 3" usually refers to "SATA-III" which is 6 Gbps. "SATA 2" (or "SATA-II") runs at 3 Gbps. Be careful you don't confuse anyone (including yourself :)).
Also you can easily add more SATA-III ports in groups of 2 or 4 via PCIe cards (I have 2x 2-port ASM1061 cards in my Z77X-UP5 TH at the moment and may add a third: they're cheap) to give you more expansion in the future.

With all the new information this week I was getting a little confused; thanks for clearing it up. When I check out the specs of the various video cards, it notes that PCIe x16 performance is halved when additional PCIe cards are added to the mix. I want to take full advantage of the video card, which to me means that I don't want to add additional PCIe cards of any kind. Is that incorrect thinking?

There are obviously lots of factors in a backup solution design. A system like this sounds like a "serious" system, so hopefully you'll be maintaining at least two backups of your data (on separate devices, and preferably in separate physical locations) and be confident in knowing where to find and how to restore all your important data. Speed of access will affect both backups and restores.
So have you considered something like a Drobo 5D (connected by Thunderbolt if you had it) for the first level of backups, and either another of the same or a 5N for your second level?

My own data is currently ~6.5 TB in volume, and my backups are to individual 3.5" drives in SATA docks. The weekly and monthly backups are just sets of drives which get slotted in, verified and updated (and verified again) and then removed and stored in carriers (easily transported offsite as a box of drives, and easily restored from as long as I have an OS X machine with a SATA dock, and the filesystem decryption keys: no special Drobos required for restoration). This is why I have 12 SATA ports in my machine at the moment (plus external FW800 and USB3 docks if I need them).
You don't necessarily have to do things the same way, but I strongly recommend you think the whole process through if your data and your time are valuable. Funny how it's those of us who have lost files at some stage who are the ones who get serious about backups. :) I established this system while on a FW800+USB2-only machine, and am revelling in the speed of my new workstation. It used to be that if I replaced a whole failed drive (e.g. one of the backups: every drive fails eventually) and kicked off a new backup that had to copy a LOT of new data, the process could take a weekend. Having so much of my data on direct SATA and eSATA connections now has made a massive difference to the "straightline" speed of each drive, and has also allowed the parallel backups to each drive to not be bottlenecked by common I/O buses.
I went ahead and ordered the Drobo 5N for backups. Since I went with the Sandy Bridge build I am not going to have Thunderbolt access, so I might be best served going with a less expensive model that has eSATA for faster nightly backups (although I intend to back up to the 5N every night in the meantime).

This brings me to my current backup process question. I currently use ChronoSync to facilitate the backups. It doesn't seem like there is any software out there that is as smart as we are (I guess that is a good thing??), meaning that whenever I change the folder structure (and that includes a folder name, which happens in FCPX whenever you rename an "event"), the software does not know what to do and will copy the entire folder, because it does not think that any of the media exists on the backup drive.

I have been thinking about the solution and wanted to see what everyone thought about this basic process for the 5N (which will be RAID5 with 5 x 3TB HDDs and an mSATA SSD):
1. Separate volumes dedicated to each device that is being backed up on the network (main computer, laptop, HTPC, etc.)
2. Mirror backup or carbon-copy clone of each device. The issue here is that if I need to make a small change to a project on the network drive (so small that it does not need to be moved to the working RAID), I don't want to copy over the most recent file
-What it comes down to is that I need to do major research on the complete functionality of backup software. As a starting point, I was interested to hear what everyone else's processes were.

Since several people have answered your other questions, I'm going to address the one about the Samsung 840 vs. 840 Pro, which technically wasn't a question. However, keep in mind there are several key differences between the 840 and the 840 Pro:

1/ The 840 uses 3-bit-per-cell NAND (known as TLC NAND); the 840 Pro uses 2-bit-per-cell MLC
2/ The 840 has a 3-year warranty; the 840 Pro has a 5-year warranty
3/ TLC NAND is slower than MLC NAND, both in read/write time AND IOPS
4/ TLC NAND has far fewer P/E (Program/Erase) cycles than MLC NAND (roughly 1,000 vs. 3,000)
5/ TLC NAND has slower erase times than MLC NAND

There are reasons for this. TLC NAND is technically a lower-grade form of MLC NAND, and it's what's used in the 840, hence its shorter 3-year warranty. My advice to you is to go for two 256GB Samsung 840 Pros: you'll get far better reliability/longevity, greater peace of mind, faster performance, and a few extra GB of storage.

/ gx
I went ahead and returned the Samsung 840s based on this information, along with morganb's comments about TRIM and the 6Gb/s max of the controller. I have determined that I'm not going to need the extra oomph that would come with a RAID OS drive, and I could probably do without the headache. Please note my previous response about losing PCIe x16 performance with the addition of PCIe cards. Have you experienced this by putting your crazy awesome SSD RAID platform in? And just for informational purposes, DBP wasn't sure if standard SSDs in a RAID config were bootable. I have read that they are, but can you confirm?

Great stuff guys, thanks!!
 

Shouldn't we take this out of gxsolace's build post and have a separate thread?

Very interesting setup, and great call on the dedicated SSD for the Lightroom catalogue. My main business is video production, but I spend considerable time in Lightroom processing photos. Do you find an increase in overall Lightroom speed? I too shoot the 5D Mark III, but have been shooting at lower RAW resolution because of the editing constraints; would love to get beyond that.
Many catalog operations are I/O-bound. In my workstation I have a 240 GB SanDisk Extreme SSD dedicated to catalogs (and to Photoshop scratch, as usually it's one program or the other that's actually active at any one point in time). I definitely notice the difference in startup time with my 70,000+ image catalogs and when importing images in place ("Add"ing) even when those images are on other "normal" disks.

With all the new information this week I was getting a little confused; thanks for clearing it up. When I check out the specs of the various video cards, it notes that PCIe x16 performance is halved when additional PCIe cards are added to the mix. I want to take full advantage of the video card, which to me means that I don't want to add additional PCIe cards of any kind. Is that incorrect thinking?
This is dependent on the details of your motherboard. For instance, on many boards there's an x16 slot and an x8 slot, but if the x8 slot is occupied then the "x16" slot only runs at x8 as well. However, the x4 and x1 slots usually get separate lanes from the chipset. So it all depends on the specific motherboard and which slots you put the cards in (and note that most SATA cards are only x1 anyway).

It doesn't seem like there is any software out there that is as smart as we are (I guess that is a good thing??), meaning that whenever I change the folder structure (and that includes a folder name, which happens in FCPX whenever you rename an "event"), the software does not know what to do and will copy the entire folder, because it does not think that any of the media exists on the backup drive.
True. My own backup synchronization software (built on top of rsync) has the same issue. Because it identifies a file by its pathname, if that name changes it's seen as a new file (i.e. a deletion and an addition, not a rename). I don't go through a lot of renames so it hasn't greatly worried me. And when I do have a massive reorganisation the backups just take a bit longer to run. :)
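
For the curious, a minimal sketch of that kind of path-keyed sync planning (hypothetical code, not the actual tool described above) makes it obvious why a rename reads as a delete plus an add:

```python
# Minimal path-keyed sync planner (a sketch, not the actual rsync wrapper).
# Files are identified by relative path, so renaming a folder makes every
# file under it look deleted from the old path and added at the new one.
import os

def snapshot(root: str) -> set[str]:
    """Relative paths of every file under root."""
    return {
        os.path.relpath(os.path.join(dirpath, name), root)
        for dirpath, _dirs, names in os.walk(root)
        for name in names
    }

def plan(source_root: str, backup_root: str) -> tuple[set[str], set[str]]:
    src, dst = snapshot(source_root), snapshot(backup_root)
    return src - dst, dst - src  # (paths to copy, paths to delete)
```

Rename an FCPX event folder and every file beneath it lands in the "to copy" set even though the bytes are identical, which is exactly the ChronoSync behaviour described above.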

Time Machine can do it, as it has access to the filesystem event log. And it always knows that the backup destination is HFS+ that it can do special things to (when it's backing up to a network drive or non-HFS drive, it's actually backing up to an HFS+ sparsebundle virtual filesystem stored on that drive). I think this is one of the reasons Time Machine won't back up drives it sees as "external". And when the backup volume gets corrupted it all goes to pot: it's not unusual to have to eventually throw away a TM backup and start afresh.
The systems that back up to "normal" filesystems can't do this, but the backup copies are very robust.

I have been thinking about the solution and wanted to see what everyone thought about this basic process for the 5N (which will be RAID5 with 5 x 3TB HDDs and an mSATA SSD):
1. Separate volumes dedicated to each device that is being backed up on the network (main computer, laptop, HTPC, etc.)
2. Mirror backup or carbon-copy clone of each device. The issue here is that if I need to make a small change to a project on the network drive (so small that it does not need to be moved to the working RAID), I don't want to copy over the most recent file
-What it comes down to is that I need to do major research on the complete functionality of backup software. As a starting point, I was interested to hear what everyone else's processes were.
  • I have a local TM disk for most machines, but TM is set to exclude the folders containing my Lightroom catalogs and image trees. It's also ignoring the scratch folders I use. Thus it backs up the OS, Applications, home directories, etc.
  • On my server/workstation I back up to multiple TM drives, at least one of which is off-site at any point. It's brought back and plugged into a dock as part of the monthly cycle. I'm in the process of setting up network TM from the other machines on the LAN, each of which will use a sparsebundle on one of the server disks. I'm aiming to have this on another drive which is cycled offsite. For the last few years with 10.6 we've been making dmgs of each machine's TM drive at the start of each month and storing those on a drive which is cycled offsite. I'm hoping to simplify this process so that each machine backs up to the network TM drive as well as its local TM drive (and the network drive is cycled offsite monthly).
  • On each CustoMac I have a clone partition which the boot volume is cloned to semi-regularly (I need to know the boot clone is in a working state). I could use CCC, but at the moment I'm using SuperDuper! for this.
  • Each of the datasets on the systems (the workstation/server and only two of the clients have these) are in the backup system as having primary copies "where they live", and secondary copies on drives which are cycled offsite. Because each set has multiple secondaries, at backup time it's only those secondaries that are on mounted volumes that are updated. For example I might have /Volumes/Store11 which contains media-N00_sec and CatalogsD_sec, and /Volumes/Store13 containing the same folders (and the "primary" media-N00 and CatalogsD stored in their "home" locations). As long as Store13 is offsite and Store11 is here (or vice versa), all works. Actually, I have at least three backups of each set.
    EDIT: to be clear, I could have both Store11 and Store13 connected at once, although I would be opening myself up to unnecessary risk if I didn't have one other offline copy.
I only use cloning of boot drives (on both my CustoMacs, as well as some Apple machines). In addition to that I have TM backing up the OS/applications/etc of each, and my synchronization software backing up the meaty data.

And just for informational purposes, DBP wasn't sure if standard SSDs in a RAID config were bootable. I have read that they are, but can you confirm?
They are on Apple hardware. It's just a question of whether Chameleon can cope. Hopefully yes.
 

I went ahead and returned the Samsung 840s based on this information, along with morganb's comments about TRIM and the 6Gb/s max of the controller. I have determined that I'm not going to need the extra oomph that would come with a RAID OS drive, and I could probably do without the headache. Please note my previous response about losing PCIe x16 performance with the addition of PCIe cards. Have you experienced this by putting your crazy awesome SSD RAID platform in? And just for informational purposes, DBP wasn't sure if standard SSDs in a RAID config were bootable. I have read that they are, but can you confirm?

Great stuff guys, thanks!!

Hi Dnk -

RAID configurations are indeed bootable using Chameleon and variations of it, like Chimera. There was an issue once where a FaceTime fix was implemented in the branches that would break RAID support, but that has since been fixed. It's a bit of a process to get the RAID bootable, but I know there's a YouTube video where someone demonstrates the process of setting it up.

I did get my PCIe SSD RAID array to boot but that required a lot of time. Essentially, I had to install OS X on a single drive, install the driver for the PCIe RAID cards, then clone that copy of OS X over to the array, and finally go through the process of making it bootable.

The one issue that I had with RAID (maybe it doesn't occur on a 2-drive array) was EBIOS read errors during Chimera/Chameleon load time, prior to the boot selection menu appearing. Often the array would be rendered unbootable because of this, and I'd have to boot into another volume and clear the boot caches; the error would then disappear and I could boot off the array again. It was a huge annoyance, which may have been due to the fact that my array for booting OS X consisted of 8 drives (lol), but I simply got tired of it. So now OS X is installed on a single Samsung 840 Pro, which is, frankly, plenty fast just for the OS to boot.
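
For reference, the stripe set itself (the simpler two-SSD software-RAID case on the Intel ports, not my PCIe hardware array) can be created in one step with OS X's own diskutil. A hedged sketch follows; the disk identifiers are hypothetical, and this destroys the member disks, so check `diskutil list` first:

```python
# Sketch: create a 2-SSD AppleRAID stripe set on OS X via diskutil.
# WARNING: this DESTROYS all data on the member disks. The identifiers
# below are hypothetical -- verify yours with `diskutil list` first.
import subprocess

members = ["disk1", "disk2"]  # the two SSDs (hypothetical identifiers)

subprocess.run(
    ["diskutil", "appleRAID", "create", "stripe", "MacRAID", "JHFS+", *members],
    check=True,
)
# Making the resulting set bootable under Chimera/Chameleon is the extra
# process described above: install to a single drive first, then clone over.
```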

As far as losing PCIe performance goes, you're not going to see any real-world performance drop. Depending on your motherboard's slot arrangement, PCIe slots may be paired, so putting a PCIe card into the slot paired with your graphics card's slot will drop the graphics card from x16 down to x8. But frankly speaking, you won't even notice the hit, as even at x8 there's still plenty of bandwidth left.

The motherboard manufacturer's website (in my case, Gigabyte) will indicate which slots are shared with other slots.
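
To put numbers behind "plenty of bandwidth left", here are the standard per-lane PCIe rates (how much traffic the graphics card actually generates is the assumption here):

```python
# Per-direction PCIe bandwidth from the standard per-lane rates:
# PCIe 2.0: 5 GT/s with 8b/10b encoding    -> 0.5 GB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
PER_LANE_GB_S = {"2.0": 0.5, "3.0": 0.985}

for gen, per_lane in PER_LANE_GB_S.items():
    for lanes in (8, 16):
        print(f"PCIe {gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s each way")
# Even PCIe 2.0 x8 (~4 GB/s each way) exceeds what a single GPU typically
# pushes over the bus, which is why the x16 -> x8 drop is rarely measurable.
```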

I hope this helps. :)

/ gx
 

This brings me to my current backup process question. I currently use ChronoSync to facilitate the backups. It doesn't seem like there is any software out there that is as smart as we are (I guess that is a good thing??), meaning that whenever I change the folder structure (and that includes a folder name, which happens in FCPX whenever you rename an "event"), the software does not know what to do and will copy the entire folder, because it does not think that any of the media exists on the backup drive.

Hi Dnk -

I forgot this bit you wrote above.

There are two main types of backup strategies: file-level and block-level backups. A file-level backup only recognizes file/directory changes, and is largely inefficient. This is because it will back up entire files, regardless of what changed. What if only 1 byte of a 1GB file changed? A file-level backup solution would make a backup of the entire 1GB file again. You can see how this can be incredibly space- and time-consuming.

Block-level backup systems examine changes at a deeper filesystem level (the block level), and only back up the change -- commonly referred to as a differential backup. So that 1GB file that had just a 1-byte change? The new backup of that file would just be the changed block (plus the metadata recording its location). Give GoodSync for Mac a shot. It is a block-level-aware backup solution. Its implementation is not as elegant as Time Machine's, but it is a true block-level backup solution, and you can even block-level sync from one machine to another.
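
A toy illustration of the block-level idea (a sketch of the concept only; GoodSync's real implementation is of course more sophisticated):

```python
# Toy block-level diff: hash fixed-size blocks and ship only changed ones.
# A concept sketch -- not GoodSync's actual implementation.
import hashlib

BLOCK_SIZE = 4096  # bytes

def block_hashes(path: str) -> list[bytes]:
    """SHA-256 digest of each fixed-size block in the file."""
    digests = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            digests.append(hashlib.sha256(chunk).digest())
    return digests

def changed_block_indices(old_path: str, new_path: str) -> list[int]:
    """Indices of blocks that differ between two versions of a file."""
    old, new = block_hashes(old_path), block_hashes(new_path)
    span = max(len(old), len(new))
    return [i for i in range(span)
            if i >= len(old) or i >= len(new) or old[i] != new[i]]
# A 1-byte edit in a 1GB file dirties one 4KB block, so only ~4KB
# (plus its index) needs to travel to the backup, not the whole gigabyte.
```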

Time Machine is incredibly good in its implementation, but it is not yet block-aware. One day if that should change, it would be perfect.

Hope that helps you.

/ gx
 

2 x 6G ports, 4 x 3G ports.

Having a soft RAID won't cost you very much in terms of CPU cycles. However, soft RAID isn't the best way to go if you want performance RAID. Also, I don't know that you can boot a hack from a soft RAID. RAID of SSDs becomes noticeable only with large files. However, RAIDing your SSDs will mean that you lose TRIM support, which will end up costing you drive performance. Also, doubling up two SATA-III 6Gb/s drives will not give you 12Gb/s, as the controller itself can only cope with 6Gb/s. If you want a performance SSD setup, use a PCIe solution like gxsolace has.

Hi morganb -

Hehe, thanks for the mention! :)

But actually, this is technically not correct. All current Intel chipsets come with two SATA 6Gbps ports, each offering full 6Gbps full-duplex throughput on its own channel. This means that on each port a drive can send AND receive at 6Gbps, and if both ports are used, there is a full 6Gbps of bandwidth in both directions on both ports. You will attain a theoretical doubling of performance if you RAID two SSDs in RAID 0, with the two drives consuming both ports.

However, there's a reason there are only 2 SATA 6Gbps ports: since SATA traffic technically traverses PCI Express lanes, throughput is largely dependent on the lane bandwidth. If the motherboard came with 6 or 8 SATA 6Gbps ports and ALL of them were being used simultaneously at 6Gbps, you would certainly not reach those speeds. But since there are only 2 such ports (through the Intel chipset), you can get full bandwidth -- a combined full-duplex throughput of 12Gbps in each direction using both ports in RAID 0.
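
For reference, the arithmetic behind those link rates (6Gbps is the encoded line rate; SATA's 8b/10b encoding spends 10 line bits per data byte, so usable payload tops out near 600 MB/s per port):

```python
# SATA line rate vs. usable payload after 8b/10b encoding
# (10 line bits per data byte; protocol overhead ignored).
def sata_payload_mb_s(line_rate_gbps: float) -> float:
    return line_rate_gbps * 1e9 / 10 / 1e6

print(f"SATA 3Gb/s port:  ~{sata_payload_mb_s(3.0):.0f} MB/s")   # ~300 MB/s
print(f"SATA 6Gb/s port:  ~{sata_payload_mb_s(6.0):.0f} MB/s")   # ~600 MB/s
print(f"2-port RAID0 cap: ~{2 * sata_payload_mb_s(6.0):.0f} MB/s per direction")
```

So a two-SSD stripe on the Intel ports is bounded around ~1.2 GB/s of payload per direction before DMI and drive limits come into play.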

/ gx
 