
[Success] AMD RX6000 Series working in macOS

Joined
Oct 28, 2017
Messages
64
Motherboard
Gigabyte x299x Designare 10G
CPU
i9 10940x
Graphics
2X AMD RX 6950XT
Mac
  1. iMac
  2. MacBook Pro
  3. Mac mini
Mobile Phone
  1. iOS
I am not so sure that would work; I had to use zirkaiva's SSDT because I could not boot with the device-id spoof settings. For now I shall just wait and cross my fingers for Apple to update the AMD kexts to include this series, because I am in the middle of a 3D project and have little time for testing, as this is my main machine. In Blender 3.1.2 it works fine; it takes about 2X as long to render as in Windows, but that's because Cycles is not optimized for Metal yet.
 
Joined
Feb 16, 2012
Messages
370
Motherboard
Z690 Gaming X
CPU
i9-12900K
Graphics
RX560
Mac
  1. MacBook Air
Classic Mac
  1. Power Mac
Mobile Phone
  1. Android
Are you using WhateverGreen 1.5.2 or newer? If so, try the following:
...
  • Add/use the attached SSDT:
    • SSDT-Z490-GPU-SPOOF.aml
  • This SSDT will work for any system where the GPU's base ACPI path is:
    • _SB_.PCI0.PEG0.PEGP
With WhateverGreen 1.5.2 and newer, it should be possible to just spoof the device-id, which the SSDT attempts to do.
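For context, here is a minimal sketch of what a device-id spoof SSDT of this kind generally looks like. It is only an illustration of the pattern, not the contents of the attached SSDT-Z490-GPU-SPOOF.aml: it injects the natively supported Navi 21 device-id 0x73BF (little-endian in the buffer) on the GPU's ACPI device through a DTGP-style _DSM, and it assumes a DTGP method already exists in your ACPI tables and that PEGP does not define a _DSM of its own. The table name is arbitrary.

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "GPUSPOOF", 0x00000000)
{
    External (_SB_.PCI0.PEG0.PEGP, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    Scope (\_SB.PCI0.PEG0.PEGP)
    {
        Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
        {
            Local0 = Package (0x02)
                {
                    "device-id",
                    Buffer (0x04)
                    {
                         0xBF, 0x73, 0x00, 0x00                           // spoof 0x73AF -> 0x73BF
                    }
                }
            DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
            Return (Local0)
        }
    }
}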
I have the new version of the PowerColor RX 6900 XT, device ID 73AF. After using your file, it started working properly. In addition to your SSDT-Z490-GPU-SPOOF.aml file, do I need to add anything else?
 

CaseySJ

Moderator
Joined
Nov 11, 2018
Messages
19,478
Motherboard
Asus ProArt Z690-Creator
CPU
i7-12700K
Graphics
RX 6800 XT
Mac
  1. MacBook Air
  2. MacBook Pro
  3. Mac Pro
Classic Mac
  1. Quadra
Mobile Phone
  1. iOS
I have the new version of the PowerColor RX 6900 XT, device ID 73AF. After using your file, it started working properly. In addition to your SSDT-Z490-GPU-SPOOF.aml file, do I need to add anything else?
Glad to hear it! No other changes are necessary.

Incidentally, someone just pointed me to this kext, which I haven’t tried myself, but it looks interesting in case you’d like to monitor GPU temp.
 
Joined
Feb 16, 2012
Messages
370
Motherboard
Z690 Gaming X
CPU
i9-12900K
Graphics
RX560
Mac
  1. MacBook Air
Classic Mac
  1. Power Mac
Mobile Phone
  1. Android
Glad to hear it! No other changes are necessary.

Incidentally, someone just pointed me to this kext, which I haven’t tried myself, but it looks interesting in case you’d like to monitor GPU temp.
Thanks. Your solutions are, as usual, reliable.

Are we arranging for 13900K tests?
 
Joined
Oct 4, 2018
Messages
831
Motherboard
Gigabyte C246-WU4
CPU
E-2278G
Graphics
WX7100
Mac
  1. MacBook Pro
  2. Mac mini
Are we arranging for 13900K tests?
Isn't it somewhat early? Raptor Lake is not even officially announced, much less released and available at retail.

You're more than welcome to start a new thread for Raptor Lake, but while the Z690 thread was very busy and fruitful last fall, I expect the new one to be much more sedate. Alder Lake was a significant platform change: SSDT-PLUG-ALT/SSDT-CPUR-Z690 are already written for 64 threads, which is the maximum macOS supports, and it took the Acidanthera team less than a month to come up with the right quirk for the hybrid architecture.
Basically, I expect Raptor Lake to be a drop-in upgrade for 600-series motherboards.
700-series might bring in some changes, as was the case between the 400 and 500 series, but there's not much more to lose.
 

CaseySJ

Moderator
Joined
Nov 11, 2018
Messages
19,478
Motherboard
Asus ProArt Z690-Creator
CPU
i7-12700K
Graphics
RX 6800 XT
Mac
  1. MacBook Air
  2. MacBook Pro
  3. Mac Pro
Classic Mac
  1. Quadra
Mobile Phone
  1. iOS
Thanks. Your solutions are, as usual, reliable.

Are we arranging for 13900K tests?
I am actually quite intrigued by AMD's forthcoming X670E chipset and Ryzen 7000-series of processors. The best part: Ryzen 7000 has zero E-cores and is still relatively energy efficient.

The standard Apple M1 has 4 P-cores and 4 E-cores. When creating the M1 Pro, however, what did Apple do? They increased the P-cores to 8 and decreased the E-cores to 2. In other words:
  • M1 Pro has 100% more P-Cores
  • M1 Pro has 50% fewer E-Cores
When Intel decided to create Raptor Lake from Alder Lake, what did they do?
  • Just increased the number of E-cores
  • No change in P-core count
  • Higher IPC (instructions per clock)
Who gets all excited about E-cores?

AMD is using the entire silicon area for nothing but P-cores. For desktop this is great (IMHO), but for laptops AMD does have a "marketing" disadvantage.

macOS (x86 version, not ARM version) does not differentiate between P-cores and E-cores, so for Hackintosh I would rather choose a processor with all P-cores.
 

trs96

Moderator
Joined
Jul 30, 2012
Messages
22,477
Motherboard
GA-Z97X-UD3H-BK
CPU
i5-4690K
Graphics
HD4600 / RX 570
Mac
  1. MacBook Pro
  2. Mac mini
Mobile Phone
  1. Android
Are we arranging for 13900K tests?
As core counts and clock speeds go up, so will the power draw and TDP. When will it hit a ceiling? At what point is it just not worth it to keep heading in this direction? What goes up must come down eventually. If I have a 12900K, what's the point of upgrading to a 13900K? I know we need progress, but what about more efficiency, i.e., performance per watt? Intel has added efficiency cores, but as CaseySJ points out, they're mostly useless in a hackintosh. "Who gets all excited about E-cores?" They really don't increase overall CPU efficiency the way improved performance per watt of the P-cores would.


The Intel Core i9-13900K CPU will offer a total of 24 cores and 32 threads, divided into an 8+16 configuration: 8 P-cores based on the Raptor Cove architecture and 16 E-cores based on the Gracemont architecture. All of these cores are fabricated on the 10nm ESF, or 'Intel 7', process node. Given that the chip will feature more cores and cache on the same process node, the overall die size should be a bit bigger than Alder Lake's C0 desktop die.

Twenty-four cores is impressive, but when they're adding 8 more "Atom" E-cores to reach that number, why bother? The 13900K will probably retail for about $599. That's a lot for a clock-speed bump and 8 more E-cores that really only add extra L2/L3 cache. I don't know what software might utilize those 8 E-cores; maybe Blender, FCP, and Premiere Pro. How much would they speed up your workflow? We'll have to see later in 2022.
 
Joined
Oct 28, 2017
Messages
64
Motherboard
Gigabyte x299x Designare 10G
CPU
i9 10940x
Graphics
2X AMD RX 6950XT
Mac
  1. iMac
  2. MacBook Pro
  3. Mac mini
Mobile Phone
  1. iOS
Hi, I need a little help.

What am I doing wrong?

I added another AMD RX 6950XT.

I used @zirkaiva's solution for my first AMD RX 6950 XT.
gfxutil
a3:00.0 1002:66af
/PC01@0/BR1A@0/SL01@0/pci-bridge@0/GFX0@0 = PciRoot(0x1)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)

Windows ACPI
PCIROOT(A0)#PCI(0000)#PCI(0000)#PCI(0000)#PCI(0000)
ACPI(_SB_)#ACPI(PC01)#ACPI(BR1A)#ACPI(PEGP)#PCI(0000)#(PCI0000)

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "BRG0", 0x00000000)
{
    External (_SB_.PC01.BR1A.PEGP, DeviceObj)
    External (_SB_.PC01.BR1A.SL01, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    Scope (\_SB.PC01.BR1A.PEGP)
    {
        Device (BRG0)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Device (GFX0)
            {
                Name (_ADR, Zero)  // _ADR: Address
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Local0 = Package (0x08)
                        {
                            "AAPL,slot-name",
                            Buffer (0x07)
                            {
                                "Slot-1"
                            },

                            "model",
                            Buffer (0x13)
                            {
                                "AMD Radeon 6950 XT"
                            },

                            "device-id",
                            Buffer (0x04)
                            {
                                 0xBF, 0x73, 0x00, 0x00                           // .s..
                            },

                            "@0,AAPL,boot-display",
                            Buffer (Zero){}
                        }
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }
    }

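    // _STA = Zero hides the original SL01 slot device, so macOS enumerates the GPU through the new BRG0.GFX0 node defined above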
    Name (_SB.PC01.BR1A.SL01._STA, Zero)  // _STA: Status
}

Now, for the second GPU, I duplicated and edited the SSDT myself, but I must be doing something wrong.

gfxutil
c3:00.0 1002:73a5 /PC02@0/BR2A@0/SL05@0/pci-bridge@0/GFX0@0 = PciRoot(0x2)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)

Windows ACPI
PCIROOT(C0)#PCI(0000)#PCI(0000)#PCI(0000)#PCI(0000)
ACPI(_SB_)#ACPI(PC02)#ACPI(BR2A)#ACPI(PEGP)#PCI(0000)#(PCI0000)

I am using this SSDT as SSDT-X299X-BRG1.aml

Code:
DefinitionBlock ("", "SSDT", 2, "ACDT", "BRG1", 0x00000000)
{
    External (_SB_.PC02.BR2A.SL02, DeviceObj)
    External (_SB_.PC02.BR2A.SL05, DeviceObj)
    External (DTGP, MethodObj)    // 5 Arguments

    Scope (\_SB.PC02.BR2A.SL05)
    {
        Device (BRG1)
        {
            Name (_ADR, Zero)  // _ADR: Address
            Device (GFX0)
            {
                Name (_ADR, Zero)  // _ADR: Address
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    Local0 = Package (0x08)
                        {
                            "AAPL,slot-name",
                            Buffer (0x07)
                            {
                                "Slot-2"
                            },

                            "model",
                            Buffer (0x13)
                            {
                                "AMD Radeon 6950 XT"
                            },

                            "device-id",
                            Buffer (0x04)
                            {
                                 0xBF, 0x73, 0x00, 0x00                           // .s..
                            },

                            "@0,AAPL,boot-display",
                            Buffer (Zero){}
                        }
                    DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                    Return (Local0)
                }
            }
        }
    }

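    // _STA = Zero hides SL05; note that BRG1 above is also scoped under SL05 (see the postscript below)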
    Name (_SB.PC02.BR2A.SL05._STA, Zero)  // _STA: Status
}

The issue is that the second GPU is not detected, and when the machine sleeps it crashes and reboots.

Thanks.

P.S.: I changed SL02 to SL05 and vice versa. Testing now.

FIXED. I had swapped SL05 with SL02 ... I am sorry.

Sorry, @Feartech, you are right. I have been busy with Blender and Unreal ... my bad. Thanks!!!
 

Feartech

Moderator
Joined
Aug 3, 2013
Messages
26,974
Motherboard
Asus N752VX-OpenCore
CPU
i7-6700HQ / HM170
Graphics
HD 530 1920 x 1080
Mac
  1. iMac
Mobile Phone
  1. iOS
Hi, I need a little help.

What am I doing wrong?

I added another AMD RX 6950XT.

Please update your hardware profile so that others can help you more easily.
 
Joined
Apr 11, 2014
Messages
80
Motherboard
ASUS X299 SAGE/10G
CPU
i9-7980XE
Graphics
Radeon VII
Hi, I need a little help.

What am I doing wrong?

I added another AMD RX 6950XT.

I've got the same issue, but with dual 6900 XTs and dual Radeon VIIs on different motherboards, different systems, and different OSes. The 6900 XTs are on an ASUS Pro WS C621-64L SAGE/10G with Big Sur 11.6.7, and the Radeon VIIs are on a Supermicro X12SPA-TF with Monterey.

One is detected and the second one shows up only as a display. I've been working on it for a really long time, still no luck.
 