
Moving my boot volume to APFS RAID

Motherboard: ASRock X99 Extreme6 · CPU: E5-2690 v4 · Graphics: Radeon VII
Recently I got a Supermicro AOC-SLG3-2M2 dual M.2 card. It's inexpensive and takes two x4 M.2 NVMe drives, but the PCI-e slot it goes in has to support bifurcation, and how you enable that depends on your chipset: the newest platforms can handle it automatically, while on most X99 boards it's not so simple. I followed this guide on modifying the UEFI variables for IIO configuration to bifurcate my bottom PCI-e slot: https://www.win-raid.com/t3323f16-Guide-How-to-Bifurcate-a-PCI-E-slot.html

The offset the author found, 0x539, was the same for me, and I likewise only had to change that one value to the x4x4 setting. I used the same UEFI shell USB stick I already had for running bcfg boot commands; from it I ran RU, opened the UEFI variables with Alt-=, and made the change in IntelSetup as described.

My system runs Mojave, by the way. High Sierra or Catalina should also work for this, though I don't know whether there are any differences.

So with both SSDs recognized, I opened Disk Utility and started RAID Assistant, selected the two drives for an APFS RAID 0 volume, and created it. At that point I had some trouble cloning to the new volume with Disk Utility (big surprise), so what I recommend is the command line:

Code:
sudo asr restore --source /dev/diskN --target /dev/diskM --erase

where N is the disk number of the APFS container of the running Mojave system, and M is the disk number of the APFS container that was just created in Disk Utility. M is not the number of either of the member disks in the RAID set!
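To make that concrete, here's how I'd identify the two containers first. The disk numbers below are hypothetical; check diskutil list for your own:

Code:
diskutil list
# Look for the "APFS Container Scheme" entries. Suppose the running system's
# container is disk1 and the freshly created RAID set's container is disk5
# (hypothetical numbers; yours will differ):
sudo asr restore --source /dev/disk1 --target /dev/disk5 --erase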

There was a minor error message at the very end, as I recall, which I ignored. At this point I also copied my EFI folder over to one of the SSDs in the RAID set (each of them gets created with an empty EFI partition) and set my BIOS to boot from there.
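If you'd rather do the EFI copy from Terminal than with an EFI mounter tool, something like this should work. The disk identifiers are hypothetical, so check diskutil list for yours:

Code:
sudo diskutil mount disk0s1           # EFI partition of the old boot SSD (hypothetical identifier)
sudo cp -R /Volumes/EFI/EFI /tmp/EFI  # stash a copy of the EFI folder
sudo diskutil unmount disk0s1
sudo diskutil mount disk2s1           # EFI partition of one SSD in the RAID set (hypothetical identifier)
sudo cp -R /tmp/EFI /Volumes/EFI/     # put the EFI folder on the new drive
sudo diskutil unmount disk2s1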

There's one last trick, because at this point boot-up gets stuck in a reboot loop involving the caches rebuild. I found the solution here: https://forums.macrumors.com/thread...-a-howto.2125096/?post=26636374#post-26636374

You just have to boot into single-user mode, remount the root filesystem read/write, and delete the file /usr/standalone/bootcaches.plist.
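In single-user mode that comes down to roughly this (a minimal sketch of the standard remount-and-delete steps):

Code:
/sbin/mount -uw /                      # remount the root filesystem read/write
rm /usr/standalone/bootcaches.plist    # delete the cache descriptor causing the loop
reboot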

It's feeling pretty snappy. The benchmark result is crazy fast!
 

Attachment: Screen Shot 2020-01-14 at 5.10.55 AM.png
Oh, I forgot to mention one thing: I also created SSDTs for the SSDs in the array. The first one was already covered by the ANS2 SSDT I had made for the M.2 drive that previously sat in the same slot; it is scoped to PCI0.BR1A, disables the H000 name, and defines ANS2. The one for the second SSD looks similar:

Code:
DefinitionBlock ("", "SSDT", 1, "KGP", "X99ANS3", 0x00001000)
{
    External (_SB_.PCI0.BR1B, DeviceObj)    // (from opcode)
    External (_SB_.PCI0.BR1B.D076, DeviceObj)    // (from opcode)
    External (_SB_.PCI0.BR1B.H000, DeviceObj)    // (from opcode)
    External (DTGP, MethodObj)    // 5 Arguments (from opcode)

    Scope (_SB.PCI0.BR1B)
    {
        Scope (H000)
        {
            Name (_STA, Zero)  // _STA: Status
        }

        Scope (D076)
        {
            Name (_STA, Zero)  // _STA: Status
        }

        Device (ANS3)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
            {
                Store (Package (0x0C)
                    {
                        "AAPL,slot-name", 
                        Buffer (0x09)
                        {
                            "Built In"
                        }, 

                        "built-in", 
                        Buffer (One)
                        {
                             0x00                                           
                        }, 

                        "device-id", 
                        Buffer (0x04)
                        {
                             0x01, 0xA8, 0x00, 0x00                         
                        }, 

                        "device_type", 
                        Buffer (0x10)
                        {
                            "AHCI Controller"
                        }, 

                        "name", 
                        Buffer (0x1E)
                        {
                            "Intel Chipset AHCI Controller"
                        }, 

                        "model", 
                        Buffer (0x22)
                        {
                            "Intel X99 Chipset AHCI Controller"
                        }
                    }, Local0)
                DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                Return (Local0)
            }
        }
    }
}

Here the scope is changed to PCI0.BR1B, and both the H000 and D076 names have to be disabled. The drive now shows up in IOReg as ANS3, and it has the built-in property.
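One quick way to confirm it from Terminal (ANS3 is the device name defined in the SSDT above; the grep just picks out the interesting properties):

Code:
ioreg -l -r -n ANS3 | grep -e built-in -e slot-name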

I did this before creating the RAID array.
 
I'm beginning to realize that there's a performance tradeoff involved: somewhat lower IOPS for small random accesses like 4K reads, in exchange for much higher sequential transfer rates. So really, what I should be doing is using the fastest single NVMe SSD I can afford as my boot volume and keeping this RAID array as my data drive. Live and learn!
 