
[Success] AMD RX6000 Series working in macOS

Thanks. Your solutions are, as usual, reliable.

Are we arranging for 13900K tests?
I am actually quite intrigued by AMD's forthcoming X670E chipset and Ryzen 7000 series processors. The best part: Ryzen 7000 has zero E-cores and is still relatively energy efficient.

The standard Apple M1 has 4 P-cores and 4 E-cores. When creating the M1 Pro, however, what did Apple do? They increased the P-cores to 8 and decreased the E-cores to 2. In other words:
  • M1 Pro has 100% more P-Cores
  • M1 Pro has 50% fewer E-Cores
When Intel decided to create Raptor Lake from Alder Lake, what did they do?
  • Just increased the number of E-cores
  • No change in P-core count
  • Higher IPC (instructions per clock)
Who gets all excited about E-cores?

AMD is using the entire silicon area for nothing but P-cores. For desktop this is great (IMHO), but for laptops AMD does have a "marketing" disadvantage.

macOS (x86 version, not ARM version) does not differentiate between P-cores and E-cores, so for Hackintosh I would rather choose a processor with all P-cores.
 
Are we arranging for 13900K tests?
As core counts and clock speeds go up, so will the power draw and TDP. When will it hit a ceiling, where it's just not worth it to keep heading in this direction? What goes up must come down eventually. If I have a 12900K, what's the point of upgrading to a 13900K? I know we need progress, but what about more efficiency, meaning performance per watt? Intel has added efficiency cores, but as CaseySJ points out, they're mostly useless in a Hackintosh: "Who gets all excited about E-cores?" They really don't increase the overall CPU efficiency the way improved performance per watt on the P-cores would.


The Intel Core i9-13900K CPU will offer a total of 24 cores and 32 threads. These are divided into an 8+16 configuration: 8 P-cores based on the Raptor Cove architecture and 16 E-cores based on the Gracemont architecture, all fabricated on the 10nm ESF or 'Intel 7' process node. Given that the chip will feature more cores and cache on the same process node, the overall die should be a bit bigger than Alder Lake's C0 desktop die.

Twenty-four cores is impressive, but when they are adding 8 more "Atom" E-cores to reach that number, why bother? The 13900K will probably retail for about $599. That's a lot for a CPU clock-speed bump and 8 more E-cores that really only add extra L2/L3 cache. I don't know what software might utilize those 8 E-cores; maybe Blender, FCP, and Premiere Pro. How much would they speed up your workflow? We'll have to see later in 2022.
 
Hi, I need a little help.

What am I doing wrong?

I added another AMD RX 6950 XT.

I used @zirkaiva's solution for my first AMD RX 6950 XT.
gfxutil
a3:00.0 1002:66af /PC01@0/BR1A@0/SL01@0/pci-bridge@0/GFX0@0 = PciRoot(0x1)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)

Windows ACPI
PCIROOT(A0)#PCI(0000)#PCI(0000)#PCI(0000)#PCI(0000)
ACPI(_SB_)#ACPI(PC01)#ACPI(BR1A)#ACPI(PEGP)#PCI(0000)#PCI(0000)

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "BRG0", 0x00000000)
{
    External (_SB_.PC01.BR1A.PEGP, DeviceObj)
    External (_SB_.PC01.BR1A.SL01, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    Scope (\_SB.PC01.BR1A.PEGP)
    {
        Device (BRG0)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Device (GFX0)
            {
                Name (_ADR, Zero)  // _ADR: Address
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Local0 = Package (0x08)
                        {
                            "AAPL,slot-name",
                            Buffer (0x07)
                            {
                                "Slot-1"
                            },

                            "model",
                            Buffer (0x13)
                            {
                                "AMD Radeon 6950 XT"
                            },

                            "device-id",
                            Buffer (0x04)
                            {
                                 0xBF, 0x73, 0x00, 0x00                           // 0x73BF, little-endian (RX 6900 XT device ID)
                            },

                            "@0,AAPL,boot-display",
                            Buffer (Zero){}
                        }
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }
    }

    Name (_SB.PC01.BR1A.SL01._STA, Zero)  // _STA: Status (disables the original SL01 device so the GPU enumerates via PEGP.BRG0)
}
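
Side note for anyone copying these tables: both declare DTGP as an External method, so a DTGP method must actually be defined in another table that you load (commonly a standalone SSDT-DTGP); if it is missing, the _DSM methods above have nothing to call. For reference, the widely used DTGP helper looks like the following. This is the standard snippet, not something specific to this build:

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "DTGP", 0x00000000)
{
    // Standard DTGP helper: lets a device's _DSM hand its property
    // package back to macOS when called with Apple's device-properties UUID.
    Method (DTGP, 5, NotSerialized)
    {
        If ((Arg0 == ToUUID ("a0b5b7c6-1318-441c-b0c9-fe695eaf949b")))
        {
            If ((Arg1 == One))
            {
                If ((Arg2 == Zero))
                {
                    Arg4 = Buffer (One) { 0x03 }  // functions 0 and 1 are supported
                    Return (One)
                }

                If ((Arg2 == One))
                {
                    Return (One)  // caller then returns its property package (Local0)
                }
            }
        }

        Arg4 = Buffer (One) { 0x00 }  // unrecognized UUID: nothing supported
        Return (Zero)
    }
}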

Now for the second GPU I duplicated and edited it myself, but I must be doing something wrong.

gfxutil
c3:00.0 1002:73a5 /PC02@0/BR2A@0/SL05@0/pci-bridge@0/GFX0@0 = PciRoot(0x2)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)

Windows ACPI
PCIROOT(C0)#PCI(0000)#PCI(0000)#PCI(0000)#PCI(0000)
ACPI(_SB_)#ACPI(PC02)#ACPI(BR2A)#ACPI(PEGP)#PCI(0000)#PCI(0000)

I am using this SSDT as SSDT-X299X-BRG1.aml

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "BRG1", 0x00000000)
{
    External (_SB_.PC02.BR2A.SL02, DeviceObj)
    External (_SB_.PC02.BR2A.SL05, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    Scope (\_SB.PC02.BR2A.SL05)
    {
        Device (BRG1)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Device (GFX0)
            {
                Name (_ADR, Zero)  // _ADR: Address
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Local0 = Package (0x08)
                        {
                            "AAPL,slot-name",
                            Buffer (0x07)
                            {
                                "Slot-2"
                            },

                            "model",
                            Buffer (0x13)
                            {
                                "AMD Radeon 6950 XT"
                            },

                            "device-id",
                            Buffer (0x04)
                            {
                                 0xBF, 0x73, 0x00, 0x00                           // 0x73BF, little-endian (RX 6900 XT device ID)
                            },

                            "@0,AAPL,boot-display",
                            Buffer (Zero){}
                        }
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }
    }

    Name (_SB.PC02.BR2A.SL05._STA, Zero)  // _STA: Status (note: this disables SL05, the very device BRG1 is scoped under; see the fix below)
}

The issue is that the second GPU is not detected, and when the machine sleeps it crashes and reboots.

Thanks.

P.S.: I changed SL02 to SL05 and vice versa. Testing now.

FIXED. I permuted SL05 with SL02... I am sorry.
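
For anyone who lands here with the same problem, here is a sketch of how the corrected second table presumably looks. This is my reading of the fix above, not the poster's confirmed table: mirroring the first SSDT (which scopes PEGP and disables SL01), the bridge is built under one ACPI device and the other device at the same slot address is disabled, rather than disabling the very device the bridge is scoped under. Swap SL02/SL05 if your DSDT differs:

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "BRG1", 0x00000000)
{
    External (_SB_.PC02.BR2A.SL02, DeviceObj)
    External (_SB_.PC02.BR2A.SL05, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    // Build the new bridge under one of the two devices at the slot address...
    Scope (\_SB.PC02.BR2A.SL02)
    {
        Device (BRG1)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Device (GFX0)
            {
                Name (_ADR, Zero)  // _ADR: Address
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Local0 = Package (0x08)
                        {
                            "AAPL,slot-name",
                            Buffer (0x07) { "Slot-2" },

                            "model",
                            Buffer (0x13) { "AMD Radeon 6950 XT" },

                            "device-id",
                            Buffer (0x04) { 0xBF, 0x73, 0x00, 0x00 },  // 0x73BF, little-endian

                            "@0,AAPL,boot-display",
                            Buffer (Zero){}
                        }
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }
    }

    // ...and disable the *other* device at that address, not the one
    // the bridge was just scoped under.
    Name (_SB.PC02.BR2A.SL05._STA, Zero)
}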

Sorry, @Feartech, you are right. I have been busy with Blender and Unreal... my bad. Thanks!!!
 
Hi, I need a little help. What am I doing wrong? I added another AMD RX 6950 XT. [...] The issue is that the second GPU is not detected, and when the machine sleeps it crashes and reboots.
Please update your hardware profile to make it easier for others to help you.
 
Hi, I need a little help.

What am I doing wrong?

I added another AMD RX 6950 XT.

I've got the same issue, but with dual 6900 XTs and dual Radeon VIIs. Different motherboards, different systems, different OSes. The 6900 XTs are on the ASUS Pro WS C621-64L SAGE/10G with Big Sur 11.6.7 and the Radeon VIIs are on a Supermicro X12SPA-TF with Monterey.

One is detected and the second one just shows as a display. I've been working on it for a really long time, still no luck.
 
I've got the same issue, but with dual 6900 XTs and dual Radeon VIIs. Different motherboards, different systems, different OSes. The 6900 XTs are on the ASUS Pro WS C621-64L SAGE/10G with Big Sur 11.6.7 and the Radeon VIIs are on a Supermicro X12SPA-TF with Monterey.

One is detected and the second one just shows as a display. I've been working on it for a really long time, still no luck.
I don't really understand the question very well.

In my System Information, under Graphics/Displays, I now have 2 GPUs. They also both appear under PCI.

Before, when I opened Blender, only 1 GPU was showing. After correcting the issue, it now shows 2 GPUs.

But with the Radeon VII I did not have to create an SSDT for it to show up, and as far as I know you don't need to create an SSDT for the 6900 XT either, right?

So your issue is that on both machines only 1 GPU appears?

I think you should give more information as well as upload your EFIs.
 
Hi. I'm looking to buy a 6600 XT to update my old Hackintosh rig (currently on High Sierra with Haswell), and my local store has the XFX 6600 XT Speedster Swift 210 quite cheap. Does anyone have any experience with it? Is it just a regular 6600 XT?

Thanks
 
@gonzas144 you might want to have a read of this post if you are contemplating purchasing an XFX RX 6600.

 
Before, when I opened Blender, only 1 GPU was showing. After correcting the issue, it now shows 2 GPUs.

What correction, if you don't mind me asking?
 