
[Success] AMD RX6000 Series working in macOS

Nice. Last time I flashed a Radeon card was the 560, to move from XFX bios to Sapphire, so macOS could boot and not immediately panic. Interesting that linux is needed to complete the flash.
Something I hope is just a matter of the Windows tools not being up to snuff yet for the RDNA2 refresh.

More time may simplify the process once said tools are updated.
 
It might not be needed, but at least in my testing I was unable to get ATIFlash to enumerate the GPU on Windows, while on Linux it did. The Windows versions are also lacking the "force" flag which is required to flash a VBIOS (in my case it's technically an older version, so we need to force it to flash).

The ideal world would be IgorsLab's ATIFlash version working for the XTXH variants, since that would allow us to flash arbitrary VBIOSes, unlocking the full performance of the ultimate cards.
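For anyone attempting the same on Linux, the general amdvbflash workflow looks roughly like this. This is only a sketch: the adapter index 0 and the filenames are assumptions, and it has not been verified against XTXH cards specifically — always save a backup first.

```shell
# List detected adapters and note the index of the target GPU
sudo ./amdvbflash -i

# Back up the current VBIOS from adapter 0 before touching anything
sudo ./amdvbflash -s 0 backup.rom

# Program the new VBIOS; -f forces the flash when the tool would
# otherwise refuse (e.g. flashing an older VBIOS version, as above)
sudo ./amdvbflash -p 0 new_vbios.rom -f
```

A reboot (full power cycle) is needed afterwards for the new VBIOS to take effect.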

@byteminer,

This deserves to be indexed somewhere so I've added it here:

View attachment 518520
Leaving a mark, I see :)
Thanks for indexing it, that'll make things easier to find in the future.
 
Grats all! Glad to know I can keep my badass Sapphire card. All I need is to flash the step-down 6900XT Nitro+, eh? Looks like the Toxic would be fine too. Is that one a tier above the Nitro+?
 
Hi guys. I'm about to buy a 6900 XT (reference model) for 1300 euro. I have a 10900K on a Z490 board. Is it better to use the Mac Pro 7,1 sysdef and leave everything to the 6900 XT, or to stay with the iMac 19/20 sysdef and use QuickSync? DRM advantages aside, I want to know whether, for encoding and transcoding, the Intel QuickSync engine or the Big Navi engine is better.
Thanks for your support.
 
Sapphire Pulse 6800 XT working perfectly!

Geekbench Metal: 147257

Redshift 3.0.45 (macOS)
CPU: 16 threads, 3.60 GHz, 128.00 GB
GPU(s): [AMD Radeon RX 6800 XT 16 GB 0.073ms]
Time: 00h:05m:26s
(which is roughly 1.5× a 1080 Ti)

My opinion: impressive!
 
Yeah, it is super impressive considering Redshift's Metal version has just come out of beta.

My 6900XT scored 05:16, though other posters got 04:50. I did a VFX job in Redshift last week and it was really stable too, so they did a great job.
 
This post is about how Kernel patches can affect GPU performance, specifically a PowerColor Red Devil 6900XT on a 3970X Threadripper TRX40 mobo. If you're Intel based, the results may be interesting, but the patches are of no importance. And while the results are accurate for a TRX40 mobo, they may or may not apply to non-TRX40 AMD mobos (I have none to test).

There is one Kernel patch which some of us working on the TRX40 mobo on an Italian forum a year ago found to adversely impact GPU performance [most of the work was done by fabiosun, Pavo, and myself (iGPU over there)]. This was noticed when we made the shift from VM, the only initial method of booting macOS on a TRX40 mobo, to bare metal: GPU performance was initially better under VM. On removal of this patch, GPU performance on bare metal matched that of the VM.

This patch was recently modified and re-introduced on another forum by Shaneee. I didn't notice any significant effect in most tests, but its adverse effect became noticeable when testing the 6900XT, especially in Davinci Resolve. The Kernel patch is shown in the Spoiler below.

[Attachment: screenshot of the 'fix PAT' Kernel patch]
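For readers who can't view the attachment: OpenCore kernel patches of this sort live in the Kernel -> Patch array of config.plist, and disabling one is just a matter of flipping its Enabled key. A sketch of doing that from the command line with plutil — the array index 0 and the EFI mount path are assumptions; check each entry's Comment field first to find the right patch:

```shell
# Dump the Kernel -> Patch array to find the index of the PAT patch
plutil -extract Kernel.Patch json -o - /Volumes/EFI/EFI/OC/config.plist

# Disable the patch at index 0 (adjust the index to match your config)
plutil -replace Kernel.Patch.0.Enabled -bool NO /Volumes/EFI/EFI/OC/config.plist
```

The same toggle can of course be done in any plist editor; a reboot is required for the change to take effect.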

When running the Octane X test (as discussed earlier in this thread here), I saw a 7 sec result with the 6900XT in the 3970X system, with or without the patch. That is nearly 4 sec faster than TheBloke's 6900XT result, though the difference may be due to mobo or platform differences, since he used an X299 system. And while the 7 sec result is really good, the point here is that the Kernel patch had no influence on it.

[Attachments: Octane X benchmark result screenshots]

However, I use the computer to work with Davinci Resolve. A good GPU test is the Davinci Candle benchmark (a link to the Italian forum for this test, if allowed, can be added later). The results below are using the Metal setting in Davinci Resolve 17.2 (OpenCL results are poorer).

The 6900XT results were: 18 fps @ 66 nodes and 23 fps @ 6 nodes (6TNR).

As a reference, this is roughly the same as one Radeon VII in macOS on the TRX40, or, one 2080 Ti under Win10. Not very impressive for an expensive 6900XT.

However, when the patch is disabled, the results were better: 30 fps @ 66 nodes and 40 fps @ 6 nodes (6TNR).

This is nearly a 2-fold improvement, and is similar to what I saw when using two Radeon VIIs in the TRX40 machine (also without the patch, shown in Spoiler below).

Again, as a reference, a real MacPro 2019 with (two) Pro Vega 2 Duo gets 36 fps for 6TNR, and one RTX 3090 under Linux reportedly got 40 fps at 6TNR. So the results of one 6900XT under Big Sur on the TRX40 are very good and are the equivalent of two Radeon VIIs, but the 'fix PAT' Kernel Patch must not be used.

The values shown below are from dual Radeon VIIs, set to Metal, for 66 nodes.

[Attachment: dual Radeon VII Candle benchmark screenshot]

The Davinci Resolve Candle test is one of the more real world yardsticks for GPU performance (and Resolve can utilize multiple GPUs). But, it is decidedly influenced by Kernel patches on AMD CPUs, at least on the TRX40 platform.
 
TRX40 users do not need either of the two Fix PAT kernel patches. There are actually quite a few kernel patches outlined for AMD users that TRX40 users do not need; Ryzen users, on the other hand, must use them. One main takeaway from all the testing I have done on TRX40 and regular Ryzen systems: if there were some way to have OpenCore advertise to macOS that the CPU vendor-id was "GenuineIntel", then the only kernel patch needed would be the topology kernel patch. When using macOS in a KVM+QEMU VM, even with host passthrough for the CPU, that's the only kernel patch needed.
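On the KVM side, the vendor string can indeed be overridden while still passing through the host CPU model, which is presumably how the VM setups described above get away with so few patches. A sketch of the relevant QEMU flags only — machine type, topology values, and everything else here are illustrative assumptions, and disks/firmware arguments are omitted:

```shell
# Pass through the host CPU but report an Intel vendor-id to the guest;
# -smp must still describe a topology macOS understands
qemu-system-x86_64 \
    -machine q35 -accel kvm \
    -cpu host,vendor=GenuineIntel \
    -smp 16,sockets=1,cores=8,threads=2
```

Whether OpenCore itself could spoof the vendor-id the same way on bare metal is exactly the open question raised above.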
 
It would be nice to break out the Threadripper patches and separate them, like the FX patches, as a download.
@Pavo, have you tested with a DSDT to rename the device and/or vendor?
Check this out.

 