
[Success] AMD RX6000 Series working in macOS

If you are going to compare systems, at least try comparing two systems using the same GPU, e.g. a MacPro7,1 and your hack, both using an AMD Navi graphics card.

Don't compare your hack with an Apple M1 or any other non-Intel Mac, as Apple Silicon (AS) systems use completely different processors and graphics. Plus, Apple will have optimised their new AS systems to work with DaVinci Resolve.
 
I tested it on two Intel-based macOS machines (2 Hackintoshes with 12th-gen Intel and an RX 6900 XT) and on a real MacBook Pro, and on the MacBook the GPU usage is 100%.

Which leads me to understand that everyone is on the same version of macOS and DaVinci Resolve, but only the MacBook actually uses the GPU to render...
 
What is your MacBook's GPU?
Meanwhile... the RX 6900 XT is a beast, so 10% of an RX 6900 XT could easily equal 100% of a laptop GPU.

But I agree that if the RX 6900 XT is only at 10% usage during rendering, it means the bottleneck is somewhere else. Write speed of the destination drive, maybe? Playback speed of the source drive?
What are the models of those drives?
What is your rendering speed (fps)?
 
I did tests of the same project with different graphics cards. I noticed that the weaker the graphics card, the higher its usage in Final Cut Pro.

[Attached screenshots: GPU usage for each graphics card]

And finally I fired it up on an i7-8700 with a UHD 630.

[Attached screenshot: GPU usage on the UHD 630]
 
Whoops, I tested the same project here on Windows 11 with DaVinci Resolve and got the same thing, high CPU usage and very low GPU usage, that is, the same scenario I have on macOS... so you can rule out it being something specific to macOS/Hackintosh.

I don't have Final Cut to test, but I did a search and many people complain about AMD GPUs with DaVinci Resolve...

Test hardware: i9-12900K, Z690 Aorus Elite, 32 GB DDR4-4133, and a 2 TB SN770 Gen4 NVMe.

[edit]

I tested it with Final Cut Pro X, and the same thing... it uses 100% CPU and at most 10% GPU.

This afternoon I tested adjustments with unfairgva (all values) and none of them had any effect... when rendering, whether with DaVinci or FCPX, CPU usage is intense and GPU usage is minimal.
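
In case it helps anyone reproduce the test: below is a minimal sketch of how unfairgva is typically injected through OpenCore's DeviceProperties. The PciRoot path is only an example for a Navi card behind two bridges; find your real path with gfxutil.

<key>DeviceProperties</key>
<dict>
    <key>Add</key>
    <dict>
        <!-- Example path only: your GPU's PCI path will differ; check it with gfxutil -->
        <key>PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)</key>
        <dict>
            <!-- unfairgva value 1 shown here as little-endian data; the other values I tried go in this same field -->
            <key>unfairgva</key>
            <data>AQAAAA==</data>
        </dict>
    </dict>
</dict>

As far as I understand, unfairgva targets DRM/AppleGVA behaviour rather than render scheduling, so it may simply not influence render-time GPU load, which would match what I saw.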
 
@ori69,

Interesting results from those tests! Have you also tried them all with and without RestrictEvents and/or CPUFriend?
 
Did you read this?
Your CPU:

Processor Graphics: Intel® UHD Graphics 770
 
Hi all, as of now I'm using an RX 5700 XT, which is supported by macOS OOB. I have not needed to make any modifications, SSDTs, etc.

I will soon change to a 6900 XT XTXH variant, and as I understand it I have two options to make it work: flash the BIOS or spoof the device ID. As I will use that card for Windows gaming too, I want to spoof so as not to risk losing any performance in Windows. This method also seems easier.

However, I am a bit confused about whether I really need to make an SSDT or not. I have a working card today, and as I understand it, the SSDT is needed to make macOS understand where to find the GPU.

I use the latest version of OpenCore, and my question is: do I have to spoof the device-id to 0x73BF? Is this the only thing I need to do in my case?
 
Yes and no. You can spoof through DeviceProperties, but this requires a fully named ACPI path to the GPU, and a basic "bridging" SSDT is generally required for that.
Alternatively, get a regular RX 6900 XT; this should work OOB.
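
For illustration, here is a rough sketch of what the DeviceProperties spoof could look like, assuming a typical Navi PCI path. The path below is an example only; confirm your real one with gfxutil.

<key>DeviceProperties</key>
<dict>
    <key>Add</key>
    <dict>
        <!-- Example path only; get the real one with gfxutil -->
        <key>PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/Pci(0x0,0x0)</key>
        <dict>
            <!-- 0x73BF (regular RX 6900 XT) as little-endian data -->
            <key>device-id</key>
            <data>v3MAAA==</data>
        </dict>
    </dict>
</dict>

And a bare-bones bridging SSDT sketch, assuming the GPU hangs off \_SB.PCI0.PEG0 (adjust the External path to your board's ACPI tables; deeper bridges may need naming too):

DefinitionBlock ("", "SSDT", 2, "HACK", "BRG0", 0x00000000)
{
    External (_SB_.PCI0.PEG0, DeviceObj)    // bridge device from your DSDT

    Scope (\_SB.PCI0.PEG0)
    {
        Device (PEGP)                       // names the GPU so the path resolves
        {
            Name (_ADR, Zero)
        }
    }
}

The property injection only applies once the whole path down to the GPU is named in ACPI, which is why the bridging SSDT usually comes first.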
 