
How to Enable AMD GPU Encoding instead of Intel's?

Status
Not open for further replies.
Joined
Jan 10, 2019
Messages
51
Motherboard
Gigabyte Z370XP SLI
CPU
i7-8700K
Graphics
RX 570
My machine has two GPUs: an AMD Radeon RX 570 PCIe card and an integrated Intel UHD 630. Both are enabled in the BIOS, and the AMD card is set as the primary display adapter. The HDMI cable is attached to the AMD card as well.

Apple's apps, such as FCPX and Compressor, use the AMD GPU by default. Other video editing/encoding apps, such as VideoProc, use only the Intel GPU.

I disabled the Intel GPU in the BIOS in order to force all apps to use the AMD one. The Apple apps continued to use the AMD GPU in this case, but the other editors/encoders could not detect the AMD GPU at all!

Is there a Clover setting or a kext that enables AMD GPU encoding system-wide?

Thank you.

EDIT: here are some screenshots after disabling Intel IGFX in the BIOS:
 

Attachments

  • Screen Shot 2019-05-17 at 10.47.58 AM.png
  • Screen Shot 2019-05-17 at 10.49.09 AM.png
  • Screen Shot 2019-05-17 at 10.44.23 AM.png
  • Screen Shot 2019-05-17 at 10.44.06 AM.png
  • Screen Shot 2019-05-17 at 10.47.46 AM.png
When macOS runs with an SMBIOS whose CPU has an IGPU, macOS will always try to use Intel Quick Sync (IQS) for some encode and decode tasks, such as Finder's preview feature. It also uses the IGPU as a sort of GPU co-processor for things like AirPlay and the Metal 2 compute API (where macOS will use both the IGPU and the dGPU depending on load), so you should really keep your IGPU enabled in the BIOS.

IQS is incredibly efficient at H.264/H.265 encode/decode and can do it just as fast as a dGPU while using far less power. macOS dynamically manages dual-GPU (IGPU + dGPU) configurations, and how it does so depends a lot on the SMBIOS used.

You're using the Mac Mini 2018 SMBIOS. Real 2018 Mac Minis have an IGPU with physical ports, so macOS will always favour IQS for some encode/decode tasks, since it knows the IGPU is always available even when a dGPU is present in an external enclosure connected via Thunderbolt 3 (a real-world example).

Apps such as FCPX will use a supported dGPU, but that has to be coded into the app.

There is no way, as far as I know, to force a system-wide preference for the dGPU, other than using an SMBIOS for a system that does not have an IGPU. However, such systems use workstation-class Xeon CPUs, which have an extended instruction set, so trying to spoof a standard desktop-class CPU such as your i7 as a Xeon will lead to other problems and is not recommended.

My advice is to leave your IGPU enabled and let macOS manage it and your dGPU dynamically, which is how it's supposed to work. However, you might want to consider switching to the iMac18,3 SMBIOS, which should be a better match for your hardware, and configuring the IGPU as "headless", since on a real iMac the IGPU has no physical ports.

Cheers
Jay
 
There is no way, as far as I know, to force a system-wide preference for the dGPU, other than using an SMBIOS for a system that does not have an IGPU. However, such systems use workstation-class Xeon CPUs, which have an extended instruction set, so trying to spoof a standard desktop-class CPU such as your i7 as a Xeon will lead to other problems and is not recommended.

On 10.14.5:
defaults write com.apple.AppleGVA gvaForceAMDKE -bool YES

There are also gvaForceAMDAVCEncode, gvaForceAMDAVCDecode, and gvaForceAMDHEVCDecode.

There's no HEVCEncode flag, but gvaForceAMDKE will make it take the AMD path for HEVC encode too.

This all assumes your AMD card is set up properly to act as a hardware encoder/decoder.

Note that the individual encode/decode settings may not always work; it depends on the type of 'flow id' being used. I've had some H.264 encodes that don't match the flow id in AppleGVA but seem to work fine with gvaForceAMDKE.
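For reference, the flags above can all be toggled from Terminal. A minimal sketch (macOS only; the AppleGVA keys are undocumented user-reported preferences, so behaviour may change between macOS releases):

```shell
# Blanket switch: force the AMD path for AppleGVA encode/decode flows
defaults write com.apple.AppleGVA gvaForceAMDKE -bool YES

# Or enable individual paths instead of the blanket switch
defaults write com.apple.AppleGVA gvaForceAMDAVCEncode  -bool YES
defaults write com.apple.AppleGVA gvaForceAMDAVCDecode  -bool YES
defaults write com.apple.AppleGVA gvaForceAMDHEVCDecode -bool YES

# Inspect what is currently set in the AppleGVA domain
defaults read com.apple.AppleGVA

# Undo: overwrite with NO, or remove the key entirely
defaults delete com.apple.AppleGVA gvaForceAMDKE
```

Apps read these preferences at launch, so restart the encoding app after changing them.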
 
Thank you jaymonkey. I re-enabled the IGFX in the BIOS and also set it as the primary display adapter (even though it's headless). By doing so, both GPUs appear in the "System Information" list under "Graphics/Displays".
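As a quick check, the same Graphics/Displays information is available from Terminal; a sketch (macOS only, output format varies by macOS version):

```shell
# List every graphics adapter macOS has detected
# (same data as System Information > Graphics/Displays)
system_profiler SPDisplaysDataType

# Or just the adapter names, e.g. "Intel UHD Graphics 630"
# and "Radeon RX 570" when both GPUs are active
system_profiler SPDisplaysDataType | grep "Chipset Model"
```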

On 10.14.5:
defaults write com.apple.AppleGVA gvaForceAMDKE -bool YES
Thank you zakklol. I'll try this and see. I assume using "-bool NO" would undo this command?
 
I did quick video-encoding tests with different settings. Forcing gvaForceAMDKE makes the system use the AMD card only when doing H.264 encoding. For HEVC encoding, the gvaForceAMDKE value has no effect, and the system seems to use both GPUs to encode.

BTW, the Intel GPU gave better video quality for H.264!
 

Attachments

  • Screen_Shot_2019-05-18_at_2_02_04_AM.png
Thank you zakklol. I'll try this and see. I assume using "-bool NO" would undo this command?

That or defaults delete com.apple.AppleGVA gvaForceAMDKE
 
In case of HEVC encoding, gvaForceAMDKE value has no effect and the system seems to use both GPUs to encode.

BTW, Intel GPU gave better video quality when doing H264!


@techana,

Like I said, macOS will use both GPUs dynamically in a properly configured system. Intel IQS is very good in terms of both quality and speed when it comes to H.264/H.265 encode/decode, which is why Apple favours it over the dGPU.

Cheers
Jay
 
Nice.

defaults write com.apple.AppleGVA gvaForceAMDHEVCDecode -bool YES

Makes my iMac17,1 SMBIOS system decode 10-bit 4K HEVC on my RX 480 with very low CPU load. Great.
This didn't work before; smooth playback was only possible by stressing my 6700K (no hardware decoding).
H.264 stuff still works fine too.
 
FWIW, I have never gotten good FCPX performance on my i7-3770K/R9 280X system when the onboard Intel HD 4000 GPU was enabled and injected. I've followed all the guides and advice, ticked the various Clover boxes, and every time I enable Intel graphics alongside my AMD 280X, FCPX exports slow to a crawl. I can see in Activity Monitor that the Intel GPU is handling the FCPX export, but it takes about 3-4 times as long as when I disable the Intel graphics and run the system with only the AMD GPU. Then all FCPX exports pin the 280X in Activity Monitor and finish very quickly.

Just another data point for the discussion. I don't think this is as well understood as the mods believe. If it were, enabling my HD 4000 would speed up my FCPX exports instead of slowing them to a snail's pace, even though the Intel GPU was running full steam. I am certainly open to suggestions here, but for now I'm leaving Intel graphics off and letting my 280X run as the sole GPU. Mojave seems to understand this, and everything works better this way, on my machine at least.

A few more observations: it seems to be a thing now to use apps like VideoProc to assess whether GPU hardware encoding/decoding is in play, but in my experience every app is different, and some make full use of GPU hardware encoding/decoding even if, say, VideoProc shows it's not enabled. What VideoProc tells you is whether GPU hardware encoding/decoding is enabled for VideoProc, not for every app. Do yourself a favor: fire up Activity Monitor and watch GPU History as you encode/decode video with your various apps. Even if VideoProc indicates no hardware acceleration is possible, you may see your discrete GPU handling the full processing task while your CPU idles.
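Beyond Activity Monitor's GPU History, per-GPU load can also be sampled from Terminal while an encode runs; a sketch (macOS only, requires root):

```shell
# Sample GPU power/utilization every second, five times,
# while an export or encode is in progress
sudo powermetrics --samplers gpu_power -i 1000 -n 5
```

Comparing the samples taken during an export against an idle baseline shows which GPU is actually doing the work, independent of what any one app reports.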

It’s also important to understand that Intel Quick Sync was designed for real-time video telephony, not high quality video encoding. What’s impressive for Skype is not good enough for a Final Cut master. I see so much attention here about IQS and how enabling it is important, but nobody talks about its visual quality vs. standard encoding. Speed is not the only metric, especially if you’re doing pro work. Clients tend not to be impressed by quick turnarounds if the video looks just okay and they paid for excellent.

I would be delighted if my onboard Intel graphics delivered faster encodes with higher quality. It doesn’t work that way. Intel Quick Sync is treated here as some kind of free lunch but it does have a cost, and it is video quality.
 
IQS also got better with newer generations, so the image quality on an HD 630 is better than on an HD 4000. However, I still agree with you: it is not tuned for very high quality; it is tuned for speed. They've reached a pretty good balance now, but it still takes more bitrate to reach decent quality than another encoder tuned for quality at the expense of speed. IQS is 'good enough', but for some projects that is not actually good enough.

All VideoProc tests is whether it can create an encoder via the VideoToolbox framework, and whether that encoder is hardware accelerated. That's it. It can't even tell WHICH hardware is accelerating the encoder. Although I have noticed that in recent versions of macOS it may be possible to tease out the Metal device id from the encoder properties, so that's something I may look into.

If other programs do encoding via custom Metal compute shaders, you won't know that. Most consumer-level video encoding uses VideoToolbox, though (and by extension, likely IQS). The pro apps (e.g. FCPX) may do something different or use undocumented capabilities.

I do wonder about the HEVC encoder, though. People are seeing activity on both the iGPU and the dGPU during those encodes. Maybe Apple's encoder is a custom multi-GPU compute-based encoder? Or maybe there's a bug there?

One other thing to note: hardware encoders usually have resource limitations; there's a limit to how many streams they can encode and decode simultaneously. The AppleGVA/VideoToolbox system frameworks likely handle this transparently. If the SMBIOS/board-id indicates both iGPU and dGPU encoders, you may get a different one depending on what else is going on. And again, there's no documented way to request a specific encoder at runtime.
 