
Radeon Compatibility Guide - ATI/AMD Graphics Cards

5K ENABLED (with some limitations)

The machine is the one in my signature (Elektra).

Monitors: HP Z27Q (5K), plus a 4K monitor and a BenQ 2K monitor

GPU: RX480/RX580


Following the kext modifications described in post #998, I made the following changes:

  • The framebuffer personality used by the ATI graphics card needs to use the policies described by AGDP.kext (AppleGraphicsDevicePolicy.kext). In my case, since I'm using an RX480/RX580, the relevant kext is AMD9510Controller.kext. Open its Info.plist, search for the variable "CFG_USE_AGDC" (without quotes), and set it to true (see the PlistBuddy sketch after this list). There are two occurrences of this variable in this kext:
Code:
<key>CFG_USE_AGDC</key>
<true/>
but in other AMD kexts it may differ, so pay attention to this detail.
  • The AGDP.kext contained inside AGP.kext (AppleGraphicsControl.kext) has been modified in its Info.plist:
Code:
<key>Config1</key>
            <dict>
                <key>GFX1</key>
                <dict>
                    <key>unload</key>
                    <true/>
                </dict>
            </dict>
            <key>Custom</key>  <!-- created a custom EMPTY section -->
            <dict>
            </dict>
            <key>Config2</key>
            <dict>
                <key>GFX0</key>
                <dict>
                    <key>EDID</key>
                    <dict>
                        <key>index</key>
                        <integer>0</integer>
                    </dict>
                    <key>FeatureControl</key>
                    <integer>12</integer>
                    <key>unload</key>
                    <false/>
                </dict>
            </dict>
            <key>Config3</key>
            <dict>
                <key>GFX0</key>
                <dict>
                    <key>EDID</key>
                    <dict>
                        <key>index</key>
                        <integer>0</integer>
                    </dict>
                    <key>FeatureControl</key>
                    <integer>12</integer>
                    <key>unload</key>
                    <false/>
                </dict>
                <key>IGPU</key>
                <dict>
                    <key>unload</key>
                    <true/>
                </dict>
            </dict>
            <key>ConfigMap</key>
            <dict>
                <key>Mac-00BE6ED71E35EB86</key>
                <string>none</string>
                <key>Mac-031B6874CF7F642A</key>
                <string>none</string>
                <key>Mac-27ADBB7B4CEE8E61</key>
                <string>none</string>
                <key>Mac-42FD25EABCABB274</key>
                <string>Config2</string>
                <key>Mac-4B7AC7E43945597E</key>
                <string>none</string>
                <key>Mac-65CE76090165799A</key>
                <string>Config2</string>
                <key>Mac-77EB7D7DAF985301</key>
                <string>none</string>
                <key>Mac-81E3E92DD6088272</key>
                <string>none</string>
                <key>Mac-B809C3757DA9BB8D</key>
                <string>Config2</string>
                <key>Mac-C3EC7CD22292981F</key>
                <string>none</string>
                <key>Mac-C9CF552659EA9913</key>
                <string>none</string>
                <key>Mac-DB15BD556843C820</key>  <!-- my board ID points to Custom -->
                <string>Custom</string>
                <key>Mac-F221BEC8</key>
                <string>none</string>
                <key>Mac-F221DCC8</key>
                <string>none</string>
                <key>Mac-F42C88C8</key>
                <string>none</string>
                <key>Mac-F60DEB81FF30ACF6</key>
                <string>Config1</string>
                <key>Mac-FA842E06C61E91C5</key>
                <string>Config2</string>
                <key>Mac-FC02E91DDD3FA6A4</key>
                <string>none</string>
            </dict>
            <key>Default</key>  <!-- Default must point to Custom as well -->
            <dict>
                <key>Custom</key>
                <string></string>
            </dict>
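
As mentioned in the first bullet, the CFG_USE_AGDC flag can also be flipped from Terminal with PlistBuddy instead of a text editor. This is only a sketch: it assumes the first occurrence lives in the "ATY,Berbice" personality (as discussed later in this thread), so verify the actual key path in your copy of the kext first.
Code:
# Hypothetical key path -- check your Info.plist, the personality name may differ.
sudo /usr/libexec/PlistBuddy \
    -c "Set :IOKitPersonalities:ATY,Berbice:CFG_USE_AGDC true" \
    /System/Library/Extensions/AMD9510Controller.kext/Contents/Info.plist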
Remember that all the previous modifications needed to get the RX480/RX580 working correctly with acceleration must remain in place, including any entries in Clover's Kexts to Patch section.

At this point, install both modified kexts in /System/Library/Extensions (back up the originals to a safe folder first), rebuild the kernel cache, and restart the hack.
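A typical Terminal sequence for this step looks like the following. This is a sketch assuming the modified kexts sit in ~/Desktop; adjust the paths to your setup.
Code:
# back up the originals to a safe folder first
mkdir -p ~/kext-backup
sudo cp -R /System/Library/Extensions/AMD9510Controller.kext ~/kext-backup/
sudo cp -R /System/Library/Extensions/AppleGraphicsControl.kext ~/kext-backup/

# install the modified kexts and fix ownership/permissions
sudo cp -R ~/Desktop/AMD9510Controller.kext /System/Library/Extensions/
sudo cp -R ~/Desktop/AppleGraphicsControl.kext /System/Library/Extensions/
sudo chown -R root:wheel /System/Library/Extensions/AMD9510Controller.kext \
                         /System/Library/Extensions/AppleGraphicsControl.kext
sudo chmod -R 755 /System/Library/Extensions/AMD9510Controller.kext \
                  /System/Library/Extensions/AppleGraphicsControl.kext

# rebuild the kernel/kext caches and reboot
sudo touch /System/Library/Extensions
sudo kextcache -i /
sudo reboot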

Et voilà! The HP Z27Q is working at 5K and it's magnificent!!! :)
And it works with any pair of the three DP outputs on my GPU: (1-3), (2-3), and (1-2) all work just fine.



I still have one DP, one HDMI, and one DVI port available. I tried connecting my 2K BenQ monitor to each of them and it no longer works. It only works when connected to the integrated HD 530 (which I don't want, because that way I lose the sleep/wake functionality).

I reverted to my classical configuration (one 4K monitor and one 2K monitor) and it's the same story even without 5K: only one monitor at a time is allowed from the RX480/RX580 with these kext modifications.
So my preliminary conclusion is that, this way, only ONE monitor is allowed to work with GFX0, no matter whether MST or SST.


@Fl0r!an, @LostVector

Do you have any idea how to get my multi-monitor configurations back while still keeping 5K working? I'm open to experimenting with suggested kext modifications. Thank you.
 
I think this is probably an issue with your specific card configuration and macOS. I'm using a FirePro W7000, which is about as close to the Mac Pro 6,1 cards as you can get (I chose it specifically for that reason, even though the chip architecture is a little out of date) ... and I'm driving two 5K monitors off its four DisplayPorts just fine. The tricky part for me is that I'm only ever able to drive MST/5K monitors off one of the ATI cards at any time (I have been testing with a 7970, the W7000, and three Dell UP2715Ks).
 
More 5K/MST information:

displaypolicyd is responsible for making MST/5K support work. It requires AGDC to even run, which is why AGDC must be enabled somehow in the controller kext. If you run the "strings" command on it (it lives in /usr/libexec) you will see tons and tons of MST-related stuff. However, I have not discovered what any of the switches or other options in there mean. launchd runs displaypolicyd with a "-k 1" param; no idea what that does either.
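
For example, to poke at it yourself (assuming the command-line developer tools are installed, for the strings utility; the launchd plist path is an assumption, search /System/Library/LaunchDaemons if it differs):
Code:
# dump printable strings from the daemon and filter for MST-related entries
strings /usr/libexec/displaypolicyd | grep -i mst

# see how launchd starts it (including the "-k 1" param); path assumed
cat /System/Library/LaunchDaemons/com.apple.displaypolicyd.plist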

There seem to be tons of assumptions in Apple's code that you will only have one ATI card present.

1. If you try to run two different ATI cards with the auto-detected RadeonFramebuffer, the second card will malfunction because macOS tries to use the first card's framebuffer on the second one. The way around this is presumably to add and patch specific framebuffers with SSDTs, etc. But this is very difficult because the patching features in Clover are oriented around a single card as well.

2. displaypolicyd only ever seems to load against one video card. As I've mentioned before, I've never gotten MST/5K support to work on multiple cards.

Also, using two different video cards creates a once-per-second stutter on my system. I'm not sure what the source of it is, but the only common pattern is using more than one video card at a time. As soon as you turn off all the monitors on any extra video cards, the problem goes away. It's subtle and doesn't manifest in normal work, but watch a video with it once and you'll never unsee it.

So ... the only viable way to do serious multi-monitor, and especially multiple MST monitors, is to buy an ATI card with as many ports as you can get on it. I'd really love to test with a FirePro W9000, as I have every expectation it would drive all of my 5K monitors at once, but it's not a cheap card, and Vega plus the modular Mac Pro are just around the corner ... the runway for it wouldn't be that long.
 
I think this is probably an issue with your specific card configuration and macOS. I'm using a FirePro W7000, which is about as close to the Mac Pro 6,1 cards as you can get (I chose it specifically for that reason, even though the chip architecture is a little out of date) ... and I'm driving two 5K monitors off its four DisplayPorts just fine. The tricky part for me is that I'm only ever able to drive MST/5K monitors off one of the ATI cards at any time (I have been testing with a 7970, the W7000, and three Dell UP2715Ks).

You might be right, because a deeper analysis leads me to these preliminary conclusions:

  • Since I'm using an RX480/RX580, the relevant kext is AMD9510Controller.kext. There are two occurrences of the "CFG_USE_AGDC" variable in its Info.plist. What I discovered is that the first one is normally set to false, while the second one is already set to true. If you set the first one to true you get the MST/5K functionality, and if you revert it to false you lose it, no matter what you do with the AGDP.kext contained in AGP.kext. The second occurrence of that variable has no influence on getting 5K.
  • The extent of the modifications in AGDP.kext seems to be irrelevant. In fact, I tried repopulating the Custom section, adding at least the GFX0 key just to be sure it points to the wanted GPU, but it changes nothing. For example:

Code:
            <key>Custom</key>
            <dict>
                <key>GFX0</key>
                <dict>
                    <key>EDID</key>
                    <dict>
                        <key>index</key>
                        <integer>0</integer>
                    </dict>
                </dict>
            </dict>

it won't lead to any change in the multi-monitor behaviour, but 5K/MST will still be in place.


My hope is that if 10.12.6, or even High Sierra, brings out-of-the-box support for the RX480/580 with dedicated kexts (other than AMD9510) and framebuffers other than Baffin (the one currently used), the situation can improve and we can have multi-monitor support back in place. As far as I understand, up to 10.12.5 the support for the RX480 has in fact been based on the RX460 being natively supported in macOS, thanks to its presence in MacBook Pros. But the RX460 has only one DP in its reference design. Of course, this is just a supposition!
 
Since I'm using an RX480/RX580, the relevant kext is AMD9510Controller.kext. There are two occurrences of the "CFG_USE_AGDC" variable in its Info.plist. What I discovered is that the first one is normally set to false, while the second one is already set to true. If you set the first one to true you get the MST/5K functionality, and if you revert it to false you lose it, no matter what you do with the AGDP.kext contained in AGP.kext. The second occurrence of that variable has no influence on getting 5K.
That's very odd, because the first instance of CFG_USE_AGDC is inside the Berbice framebuffer. Which means, of course, that your card is using the Berbice framebuffer. It's possible the framebuffer usage is hardcoded into that particular kext, or that you've injected/set up framebuffers in such a way that the auto-detecting RadeonFramebuffer is not being used.
 
That's very odd, because the first instance of CFG_USE_AGDC is inside the Berbice framebuffer. Which means, of course, that your card is using the Berbice framebuffer. It's possible the framebuffer usage is hardcoded into that particular kext, or that you've injected/set up framebuffers in such a way that the auto-detecting RadeonFramebuffer is not being used.

The RX480/580 are loaded thanks to another kext, AMDRadeonX4100.kext, but inside this one there is no instance of the CFG_USE_AGDC variable. However, the Info.plist of this kext does contain references to Baffin acceleration, and as a matter of fact my card is seen as a "Baffin Prototype", that is to say an unknown Baffin model.
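
For anyone who wants to check this on their own machine, system_profiler reports how macOS identifies the card:
Code:
# prints the Chipset Model (e.g. "Baffin Prototype"), VRAM, and attached displays
system_profiler SPDisplaysDataType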
 
The RX480/580 are loaded thanks to another kext, AMDRadeonX4100.kext, but inside this one there is no instance of the CFG_USE_AGDC variable. However, the Info.plist of this kext does contain references to Baffin acceleration, and as a matter of fact my card is seen as a "Baffin Prototype", that is to say an unknown Baffin model.
I suspect you need to patch the Berbice framebuffer so that it matches your card. Or else inject ATI with the RadeonFramebuffer in Clover.
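
For the Clover route, the relevant bits live in the Graphics section of config.plist; a minimal sketch using standard Clover keys, values to be adapted to your setup:
Code:
<key>Graphics</key>
<dict>
    <key>Inject</key>
    <dict>
        <key>ATI</key>
        <true/>
    </dict>
    <key>FBName</key>
    <string>RadeonFramebuffer</string>
</dict>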
 
FWIW, while on 10.12.5 with an MSI RX460 2GB, I got a considerably lower Geekbench OpenCL compute score when using the Berbice framebuffer (with the appropriate patch applied) than with the stock drivers: 56,000 with the framebuffer vs. 72,000 without it.

Not sure if this is expected or not, but I decided to stick with the stock solution and figured I'd post this info here for others.
 
@Fl0r!an
Hi man,
I have a question, and forgive me if it has been asked before, regarding the old R9 270X and R9 280X: do they work without the need for a helper card to boot into macOS Sierra and macOS High Sierra? I'm about to buy one of them and I just want to make sure I don't get a black screen in macOS.

Thanks.
 