[Success] Radeon RX 6800 XT - Big Sur

Joined
Oct 24, 2013
Messages
845
Motherboard
Gigabyte Z590 Vision D
CPU
i7-11700K OC @ 5.2GHz
Graphics
RX 6800 XT
Mac
  1. iMac
  2. MacBook
  3. MacBook Pro
Mobile Phone
  1. iOS
It looks like the new drivers disable the USB ports. It shows up with an AppleAMDUSBXHCIPCI entry, but with no ports:
View attachment 516103

And whereas in earlier Big Sur versions you could plug a USB stick or other device into the USB-C port and it would work, and sleep was broken (even with the GPU disabled) unless the controller was disabled, neither is true now.
Very interesting, that would explain why my USB devices connected to my dock didn't work in macOS when I plugged it into the USB-C port, but the same devices worked in Windows. I only did a quick check and didn't look at the IORegistry. But while the port is disabled for USB connectivity at the moment, DisplayPort tunneling certainly is working in macOS, and it works well.

Seems like Apple is still working on enablement of the AppleAMDUSBXHCIPCI class... and this is an early beta, after all. I wonder whether, until they fully enable this new USB driver for the Radeon cards, we can find a way to allow com.apple.driver.usb.AppleUSBXHCIPCI to load instead. I don't know if it would work, but it's just a suggestion.
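If anyone wants to try that experiment, the obvious place is a Kernel -> Block entry in OpenCore's config.plist. A rough sketch only: the AMD kext's bundle identifier below is my guess based on the class name (check the real one with kextstat | grep -i amdusb), and whether the generic driver would actually attach afterwards is anyone's guess:

Code:
<key>Block</key>
<array>
    <dict>
        <key>Arch</key>
        <string>x86_64</string>
        <key>Comment</key>
        <string>Experiment: block the AMD XHCI driver so AppleUSBXHCIPCI can try to attach</string>
        <key>Enabled</key>
        <true/>
        <key>Identifier</key>
        <string>com.apple.driver.usb.AppleAMDUSBXHCIPCI</string>
        <key>MaxKernel</key>
        <string></string>
        <key>MinKernel</key>
        <string>20.5.0</string>
    </dict>
</array>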
 
Joined
Mar 6, 2013
Messages
273
Motherboard
Gigabyte X299X Designare 10G
CPU
i9-10980XE
Graphics
AMD 6900XT
Mobile Phone
  1. Android
Very interesting, that would explain why my USB devices connected to my dock didn't work in macOS when I plugged it into the USB-C port […]
A page or two back in this thread I replied to your messages of yesterday talking about TB4 docks - if you missed that could you take a look? I'd love to know more about what you've tested with regards to DP tunnelling.

In particular, I'm interested to know if there's any way to get two displays from that one USB-C port (extended, not mirrored).

Thanks
 
Joined
Oct 24, 2013
Messages
845
Motherboard
Gigabyte Z590 Vision D
CPU
i7-11700K OC @ 5.2GHz
Graphics
RX 6800 XT
Mac
  1. iMac
  2. MacBook
  3. MacBook Pro
Mobile Phone
  1. iOS
A page or two back in this thread I replied to your messages of yesterday talking about TB4 docks […]
Hello there, at least one display works with my dock connected to this USB-C port: a 1440p @ 144Hz monitor. I haven't tried connecting two monitors to see whether they both work in macOS Big Sur, or if only one does. But even if both don't work with the dock, there are still DisplayPort and HDMI outputs available on the card. I also didn't test whether sound is being routed over the USB-C port to the monitor.

I can say that this same Thunderbolt 4 dock drives dual monitors with separate images when connected to my MacBook. Not sure whether the same is possible through the 6800 XT's USB-C port in macOS. And any conclusions we draw are based on an early version of the driver, for a "USB" port that doesn't even support USB devices at the moment. A half-baked USB-C port in macOS for now, lol. So our assumptions/conclusions on 11.4 beta 1 are subject to change.
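If anyone else wants to poke at the same things, the stock Terminal tools are enough; nothing here is specific to this driver:

Code:
# List connected displays per GPU - a DP-tunneled monitor should show up here
system_profiler SPDisplaysDataType

# List audio devices - shows whether HDMI/DP audio is being routed to the monitor
system_profiler SPAudioDataType

# Confirm the controller is present but exposes no ports
ioreg -l -w0 | grep -i AppleAMDUSBXHCI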
 
Last edited:
Joined
Mar 28, 2019
Messages
131
Motherboard
Gigabyte X299X Designare 10G
CPU
i9-10980XE
Graphics
RX 580
Mac
  1. MacBook Pro
Mobile Phone
  1. Android
Alright, it appears that I will likely have to spoof my GPU ID because my 6900XT is the pre-binned version, which has the original device ID 73AF instead of 73BF.
Unfortunately I know very little about ACPI-related things. While the Dortania guide has been helpful as always, I'm struggling a bit with figuring out the exact ACPI path of my GPU device because there is a pci-bridge element in the tree, and I have no clue how to model that in my SSDT.

If somebody could point me to some resources that elaborate on how to model these kinds of chains, that would be very much appreciated.

I got as far as this before my wits ran out.

Code:
DefinitionBlock ("", "SSDT", 2, "DRTNIA", "AMDGPU", 0x00001000)
{
    External (_SB.PC02, DeviceObj)
    External (_SB.PC02.BR2A.SL05, DeviceObj)

    Scope (\_SB.PC02.BR2A.SL05)
    {
        if (_OSI ("Darwin"))
        {
            Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
            {
                Local0 = Package (0x04)
                {
                    // Where we shove our FakeID
                    "device-id",
                    Buffer (0x04)
                    {
                        0xBF, 0x73, 0x00, 0x00
                    },

                    // Changing the name of the GPU reported, mainly cosmetic
                    "model",
                    Buffer ()
                    {
                        "AMD Radeon 6900 XT"
                    }
                }
                DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                Return (Local0)
            }
        }
    }
    Scope (\_SB.PC02)
    {
        Method (DTGP, 5, NotSerialized)
        {
            If (LEqual (Arg0, ToUUID ("a0b5b7c6-1318-441c-b0c9-fe695eaf949b")))
            {
                If (LEqual (Arg1, One))
                {
                    If (LEqual (Arg2, Zero))
                    {
                        Store (Buffer (One)
                            {
                                 0x03
                            }, Arg4)
                        Return (One)
                    }

                    If (LEqual (Arg2, One))
                    {
                        Return (One)
                    }
                }
            }

            Store (Buffer (One)
                {
                     0x00
                }, Arg4)
            Return (Zero)
        }

    }

}

I'm attaching a shoddy phone picture for the time being until I get a full ioreg dump going.

Edit: Attached IOReg dump.
 

Attachments

  • screenshot.jpg
    screenshot.jpg
    2.5 MB · Views: 58
  • Mac Pro.ioreg
    9.2 MB · Views: 32
Last edited:
Joined
Jul 22, 2019
Messages
29
Motherboard
<< need only 1 model # >> See Forum Rules !!
CPU
<< need only 1 model # >> See Forum Rules !!
Graphics
<< need model # >> See Forum Rules !!
Mobile Phone
  1. Android
  2. iOS
Yes, I have a 5900X on a Gigabyte X570 with an RX 6800 and it's fine.
I am trying OpenCore, but it only shows 11.2.3. Did you install this before and then upgrade to the beta via Software Update? If not, how do you download 11.4 beta 1 directly? I tried gibMacOS-master and downloaded the version, but got a warning about a missing translation file when copying it to USB :(
 
Joined
Nov 5, 2010
Messages
502
Motherboard
MSI TRX40 Creator
CPU
AMD Threadripper 3970X
Graphics
Sapphire Nitro+ RX 6900 XT
Mac
  1. Mac Pro
Mobile Phone
  1. iOS
I think most of us only have PCIe 3.0, except for those with AMD systems. But yes, I think most of us have a faster CPU.

Your Blackmagic RAW Speed Test result is actually really interesting: only 42 FPS despite you having the same GPU as me, the 6900 XT.

@oreoapple I think this is more evidence that the BRST benchmark is bottlenecked. Maybe that 125 FPS limit we both see with our GPUs is CPU-, RAM-, PCIe-, or simply code-limited. If it were a true GPU benchmark, Cyberneticist should be getting a very similar result to ours, but they're getting a third of the speed.

So I'm happy to believe that BRST is not representative of the GPU. The remaining question is whether it's representative of Resolve Studio. I hope not! It only tests one very specific thing, though, and in a real-life scenario the user will likely have multiple GPU operations running simultaneously (decode/debayer + noise reduction + VFX + etc.), which will hopefully make fuller use of the GPU.
 

Attachments

  • Screen Shot 2021-04-23 at 8.04.11 PM.png
    Screen Shot 2021-04-23 at 8.04.11 PM.png
    660.6 KB · Views: 108
Joined
Apr 18, 2011
Messages
1,372
Motherboard
ASUS ROG Rampage VI Extreme Encore
CPU
i9-7900X
Graphics
Radeon Pro W5500
Mac
  1. MacBook Air
  2. Mac Pro
Mobile Phone
  1. iOS
Alright, it appears that I will likely have to spoof my GPU ID because my 6900XT is the pre-binned version, which has the original device ID 73AF instead of 73BF. […]

I believe you can adjust it via device properties too. Use Hackintool to grab the path.

Not at my computer, so this might be a little off, but it should be something like this:

Code:
DefinitionBlock ("", "SSDT", 2, "DRTNIA", "AMDGPU", 0x00001000)
{
    External (_SB_.PC02.BR2A, DeviceObj)
    External (_SB_.PC02.BR2A.PEGP, DeviceObj)
    External (_SB_.PC02.BR2A.SL05, DeviceObj)

    Scope (\_SB.PC02.BR2A)
    {
        If (_OSI ("Darwin"))
        {
            Scope (SL05)
            {
                Name (_STA, Zero)  // _STA: Status
            }

            Scope (PEGP)
            {
                Device (BRG0)
                {
                    Name (_ADR, Zero)  // _ADR: Address
                    Device (GFX0)
                    {
                        Name (_ADR, Zero)  // _ADR: Address
                        Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                        {
                            Local0 = Package (0x04)
                                {
                                    "device-id",
                                    Buffer (0x04)
                                    {
                                         0xBF, 0x73, 0x00, 0x00                           // .s..
                                    },

                                    "model",
                                    Buffer (0x13)
                                    {
                                        "AMD Radeon 6900 XT"
                                    }
                                }
                            DTGP (Arg0, Arg1, Arg2, Arg3, RefOf (Local0))
                            Return (Local0)
                        }
                    }
                }
            }
        }
    }

    Method (DTGP, 5, NotSerialized)
    {
        If ((Arg0 == ToUUID ("a0b5b7c6-1318-441c-b0c9-fe695eaf949b") /* Unknown UUID */))
        {
            If ((Arg1 == One))
            {
                If ((Arg2 == Zero))
                {
                    Arg4 = Buffer (One)
                        {
                             0x03                                             // .
                        }
                    Return (One)
                }

                If ((Arg2 == One))
                {
                    Return (One)
                }
            }
        }

        Arg4 = Buffer (One)
            {
                 0x00                                             // .
            }
        Return (Zero)
    }
}
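And since I mentioned device properties: the same spoof via DeviceProperties -> Add in config.plist should look roughly like this. The PciRoot path below is just a placeholder; copy the real one from Hackintool's PCIe tab. The device-id data is 0xBF 0x73 0x00 0x00 (0x73BF little-endian) base64-encoded:

Code:
<key>DeviceProperties</key>
<dict>
    <key>Add</key>
    <dict>
        <key>PciRoot(0x2)/Pci(0x1,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)</key>
        <dict>
            <key>device-id</key>
            <data>v3MAAA==</data>
            <key>model</key>
            <string>AMD Radeon 6900 XT</string>
        </dict>
    </dict>
</dict>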
 
Last edited:

CaseySJ

Moderator
Joined
Nov 11, 2018
Messages
17,985
Motherboard
Asus Z690 ProArt Creator
CPU
i7-12700K
Graphics
RX 6800 XT
Mac
  1. MacBook Air
  2. MacBook Pro
  3. Mac Pro
Classic Mac
  1. Quadra
Mobile Phone
  1. iOS
Alright, it appears that I will likely have to spoof my GPU ID because my 6900XT is the pre-binned version, which has the original device ID 73AF instead of 73BF. […]
Your RX 6900XT is at the following PCI path:

_SB.PC02.BR2A.SL05.PCI-Bridge@0.GFX0

But only the first four parts of the path are defined in the DSDT. The last two parts, PCI-Bridge@0 and GFX0, must be created by the SSDT in order to reach the correct final endpoint (the GPU). Only then can we change the device ID.

The process is described in this post:
I've made the necessary changes and attached both the AML and DSL files. The DSL file should have all the comments intact.
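For anyone reading along without downloading the attachments, the shape of the fix is roughly this; a sketch of the idea rather than the exact attached file, using the usual Dortania-style _DSM:

Code:
DefinitionBlock ("", "SSDT", 2, "HACK", "BRG0", 0x00001000)
{
    External (_SB_.PC02.BR2A.SL05, DeviceObj)

    Scope (\_SB.PC02.BR2A.SL05)
    {
        Device (BRG0)           // creates the missing PCI-Bridge@0
        {
            Name (_ADR, Zero)
            Device (GFX0)       // creates the GPU endpoint under the bridge
            {
                Name (_ADR, Zero)
                Method (_DSM, 4, NotSerialized)  // _DSM: Device-Specific Method
                {
                    If ((!Arg2 || !_OSI ("Darwin")))
                    {
                        Return (Buffer (One) { 0x03 })
                    }

                    Return (Package (0x02)
                    {
                        "device-id",
                        Buffer (0x04) { 0xBF, 0x73, 0x00, 0x00 }  // 0x73BF, little-endian
                    })
                }
            }
        }
    }
}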
 

Attachments

  • SSDT-Change-GPU-Device-ID.dsl
    1.8 KB · Views: 129
  • SSDT-Change-GPU-Device-ID.aml
    215 bytes · Views: 120
Joined
Jan 31, 2021
Messages
13
Motherboard
Gigabyte z490 Vision D
CPU
10850K
Graphics
MSI 6800 XT Gaming X Trio
Finally, it's alive! MSI 6800 XT Gaming X Trio, 3x DisplayPort + 1x HDMI, running triple 4K.

A few bits of strange behaviour: macOS seems to be hogging a lot more memory than when I was using the 5700 XT, and it goes to full memory utilisation when I load Logic. Oddly, I can no longer boot with monitors plugged into the 5700 XT, but I'll be removing that card shortly so the 6800 XT gets the full x16 PCIe slot.

Overall, it seems a success.
 