
How to Enable AMD GPU Encoding instead of Intel's?

FWIW I have never gotten good FCPX performance on my i7-3770K/R9 280X system when the onboard Intel HD 4000 GPU was enabled and injected. I've followed all the guides and advice, ticked the various Clover boxes, and every time I enable Intel graphics alongside my AMD 280X, FCPX exports slow to a crawl. I can see in Activity Monitor that the Intel GPU is handling the FCPX export, but it takes about 3-4 times as long as when I disable the Intel graphics and run the system with only the AMD GPU. With the 280X as the sole GPU, all FCPX exports pin the 280X in Activity Monitor and finish very quickly.

Just another data point for the discussion. I don't think this is as well understood as the mods believe. If it were, enabling my HD 4000 would speed up my FCPX exports instead of slowing them to a snail's pace even though the Intel GPU is running full steam. I am certainly open to suggestions here, but for now I'm leaving Intel graphics off and letting my 280X run as the sole GPU. Mojave seems to understand this and everything works better this way, on my machine at least.

A few more observations: it seems to be a thing now to use apps like Videoproc to assess whether GPU hardware encoding/decoding is in play or not, but in my experience every app is different, and some make full use of GPU hardware encoding/decoding even if, say, Videoproc shows it's not enabled. What Videoproc tells you is whether GPU hardware encoding/decoding is enabled for Videoproc, not for every app. Do yourself a favor and fire up Activity Monitor and watch GPU History as you encode/decode video with your various apps. Even if Videoproc indicates no hardware acceleration is possible, you may well see your discrete GPU handling the full processing task while your CPU idles.
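If you prefer a command-line view of the same thing, powermetrics can sample GPU activity while an export runs. A minimal sketch, assuming a recent macOS; it needs sudo and the exact output varies by version and GPU:

sudo powermetrics --samplers gpu_power -i 1000 -n 10

Kick off the encode, run that alongside it, and watch which GPU's utilization climbs.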

It’s also important to understand that Intel Quick Sync was designed for real-time video telephony, not high-quality video encoding. What’s impressive for Skype is not good enough for a Final Cut master. I see a lot of attention here on IQS and how important enabling it is, but nobody talks about its visual quality versus standard software encoding. Speed is not the only metric, especially if you’re doing pro work. Clients tend not to be impressed by quick turnarounds if the video looks just okay when they paid for excellent.

I would be delighted if my onboard Intel graphics delivered faster encodes with higher quality. It doesn’t work that way. Intel Quick Sync is treated here as some kind of free lunch but it does have a cost, and it is video quality.
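If you want to see the trade-off for yourself and have ffmpeg installed (e.g. via Homebrew), one rough test is to push the same clip through the VideoToolbox hardware path and through software x264 at the same bitrate, then compare the results frame by frame. Just a sketch; the input filename and bitrate are placeholders:

# hardware path via VideoToolbox (typically Quick Sync or the dGPU encoder)
ffmpeg -i master.mov -c:v h264_videotoolbox -b:v 10M hw.mp4
# software path via x264, slower but generally better quality per bit
ffmpeg -i master.mov -c:v libx264 -preset slow -b:v 10M sw.mp4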

I agree with you. Since I disabled my iGPU (3770K) and switched to the iMacPro1,1 SMBIOS, everything is way faster. On top of that, Preview in Finder works and I no longer need NoVPAJpeg :)
 
That’s interesting. I know that newer generations of Intel integrated graphics offer _faster_ IQS encoding, but are you sure they also offer _better quality_? I was under the impression that the end result still looks the same. I know my Skylake system can do faster IQS encoding than my Ivy Bridge, but I don’t see any difference in video quality. Either way, I am much happier letting my 280X handle FCPX ProRes master encodes and my i7-3770K handle the subsequent Handbrake x264 deliveries. The video quality at both stages is noticeably better than any generation of IQS I’ve tested (up to Skylake).

Overall I think the rush to HEVC/H.265 is premature. The codec is not ready for prime time, and is widely misunderstood by consumers and pros alike. H.265 is not some magic bullet that shrinks file sizes at no cost to anyone. It’s just a codec that trades bitrate for computational complexity: files get smaller, but they take more work to crunch, and crucially more power to _decode_ during playback on anything without a capable hardware decoder. The “High Efficiency” in HEVC video and HEIF photos is compression efficiency, and it mostly benefits the people storing and streaming the content. For the rest of us consuming that content, HEVC means lower efficiency. More power needed to view files. More expensive to open every video on your phone, tablet, TV or computer.

That’s not how it’s supposed to be. We pay Netflix, Amazon, Apple et al to push their server farms hard to encode easily streamable files that don’t tax our playback devices. That’s how it works with H.264. Far from being “one better, innit?”, all H.265 does is pass the hard processing work on to consumers. The thinking is that eventually silicon efficiencies will shrink to the point where the added complexity at the decoding end is trivial, but even if that’s remotely feasible at this point (RIP Moore) we’re getting way ahead of ourselves moving everything to HEVC just as the consumer base is moving to 4K. All HEVC means right now is lower quality video at higher cost-per-playback. First thing I do with any new iPhone or iPad is switch its camera format from HEVC/HEIF to “Most Compatible”, i.e. H.264/JPG. I mean, iPhone video is crummy enough, does Apple need to make it even crummier by default?

I try not to be a fatalist but H.265 was taken out of the oven way too soon. We should’ve waited for H.270 or higher before declaring it the new standard for 4K video. All H.265 does is fit more files on server farms and lower content providers’ utility bills. The rest of us pay higher utility bills, need to buy new devices more frequently to keep up with the processing demands of “High Efficiency” media, and wonder why our brand new phone can’t seem to go a whole day on a charge.

Okay, back to the Hack. Presumably we’re all here because we’re ill-served by Apple’s hardware offerings, and we’re trying to build Macs more optimized for our specific needs (or we’re just cheap bastards, take your pick). My primary work on a Mac is in FCPX so I’m well-served by an overclocked i7-3770K if I pair it with a sufficiently powerful and MacOS-friendly GPU like the 280X. For _me_, pro quality video encoding is not merely desirable but non-negotiable. For someone else, none of this moaning about who has to do the heavy lifting when it comes to watching GOT on your phone probably means anything. But I think it’s beneficial to discuss what IQS actually is, what it does, what it doesn’t do, and why it may be far less important than you think. Because I see so much concern here about whether IQS is indeed being enabled on this build or that and I really do think that concern is largely misplaced.

That said, it may also be true that as a non-gamer, I am clueless about how life or death IQS is if you’re trying to hit high FPS on a budget build. I would love to hear from you if you fit that profile, and it would help de-mystify IQS even further for those chasing it without really understanding why.


IQS has also gotten better with each generation, so the image quality on an HD 630 is better than on an HD 4000. However, I still agree with you: it is not tuned for very high quality, it is tuned for speed. They've reached a pretty good balance now, but it still takes more bitrate to reach decent quality than an encoder that is tuned for quality at the expense of speed. IQS is 'good enough', but for some projects that is not actually good enough.

All VideoProc tests is whether it can create an encoder via the VideoToolbox framework, and whether that encoder is hardware accelerated. That's it. It can't even tell WHICH hardware is accelerating the encoder. Although I have noticed that in recent versions of macOS it may be possible to tease out the Metal device ID from the encoder properties, so that's something I may look into.

If other programs do their encoding via custom Metal compute shaders, you won't know that from VideoProc. Most consumer-level video encoding goes through VideoToolbox though (and by extension likely uses IQS). The pro apps (so FCPX) may do something different, or use undocumented capabilities.
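For the curious, you can run a check similar to VideoProc's yourself with ffmpeg, which also goes through VideoToolbox. By default its videotoolbox encoders refuse to fall back to software, so a failed one-second test encode usually means no hardware encoder answered. A sketch, and like VideoProc it still can't tell you which GPU did the work:

# 1-second synthetic H.264 hardware encode via VideoToolbox
ffmpeg -f lavfi -i testsrc2=duration=1:size=1280x720:rate=30 -c:v h264_videotoolbox -f null -
# same idea for HEVC
ffmpeg -f lavfi -i testsrc2=duration=1:size=1280x720:rate=30 -c:v hevc_videotoolbox -f null -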

I do wonder about the HEVC encoder though. People are seeing activity on both the iGPU and dGPU when doing those encodes. Maybe the Apple encoder is a custom multi-GPU compute-based encoder? Or maybe there's a bug there?

One other thing to note: the hardware encoders usually have resource limitations. There's a limit to how many streams they can encode and decode simultaneously. The AppleGVA/VideoToolbox system frameworks likely handle this transparently. If the SMBIOS/board-id indicates the machine has both iGPU and dGPU encoders, you may get a different one depending on what else is going on. And again, there's no documented way to request a specific encoder at runtime.
 
Thanks for the feedback. As a fellow i7-3770K user (Q2 2012 in da howse) I recommend you try changing your SMBIOS in Clover to iMac13,2 and generating an optimized SSDT for it with ssdtPRGen:


If you’re not overclocking, just drop ssdtPRGen.sh into Terminal to create your ssdt.aml. I OC’d my i7-3770K to 4.4GHz in BIOS, so my Terminal command was:

~/ssdtPRGen.sh -c 2 -t 77 -turbo 4400 -x 0

(So for a 4.5GHz OC, change the 4400 to 4500, and so on)

Drop that new ssdt.aml into EFI/Clover/ACPI/Patched, check the “Drop OEM” box under SSDT in Clover’s ACPI section to get your OC, and you should see better performance and lower idle power draw than with the iMacPro SMBIOS on an i7-3770K. I went that route too, but the good old Ivy Bridge SMBIOS still works best if you pair it with the right SSDT. Compare Geekbench numbers (especially single-core) with both setups if you want to feel better about “going back” to an older SMBIOS.
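If you’ve never run ssdtPRGen before, the whole round trip looks roughly like this in Terminal. Treat it as a sketch: the download URL points at Piker-Alpha’s repo, the 4.4GHz numbers are from my own OC, and the output path and EFI mount point are assumptions, so adjust them for your setup.

# fetch the script and make it executable
curl -o ~/ssdtPRGen.sh https://raw.githubusercontent.com/Piker-Alpha/ssdtPRGen.sh/master/ssdtPRGen.sh
chmod +x ~/ssdtPRGen.sh

# generate the SSDT for an i7-3770K overclocked to 4.4GHz
~/ssdtPRGen.sh -c 2 -t 77 -turbo 4400 -x 0

# copy the result into Clover's ACPI folder (assumes your EFI is mounted at /Volumes/EFI)
cp ~/Library/ssdtPRGen/ssdt.aml /Volumes/EFI/EFI/CLOVER/ACPI/patched/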

Thank you jaymonkey. I re-enabled the IGFX in BIOS and also set it as the primary display adapter (even though it's headless). By doing so, both GPUs appear in the "System Information" list under "Graphics/Displays".


@techana,

Only just noticed the above message ..... your dGPU should be set as the primary when using the IGPU in headless mode, and the IGPU should not show up in System Information -> Graphics / Displays; if it does, you have not configured it correctly (probably the wrong PlatformID).
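If it helps, the usual headless route for an Ivy Bridge HD 4000 is to inject a connector-less ig-platform-id. A rough Terminal sketch, assuming Clover's config.plist is on an EFI partition mounted at /Volumes/EFI and that 0x01620007 is the right connector-less ID for your iGPU (double-check against the guide; use PlistBuddy's Add with a string type if the key doesn't exist yet):

/usr/libexec/PlistBuddy -c 'Set :Graphics:ig-platform-id 0x01620007' /Volumes/EFI/EFI/CLOVER/config.plist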

For guidance see the "Headless" section of this guide :-


Cheers
Jay
 

This is what I had done prior to switching to the iMac Pro SMBIOS (mine is OC'd at 4.3GHz). I will try to compare both setups, as I kept my EFI :)

EDIT: see pictures attached. The iMac Pro SMBIOS performs better on my system (3770K @ 4.3GHz for both).
 

Attachments: Imac Pro.png (146.8 KB), Imac142.png (140.8 KB)
On 10.14.5:
defaults write com.apple.AppleGVA gvaForceAMDKE -bool YES

There's also gvaForceAMDAVCEncode, gvaForceAMDAVCDecode and gvaForceAMDHEVCDecode

There's no HEVCEncode flag, but gvaForceAMDKE will make it take the AMD path for HEVC encode too.

This is all assuming your AMD card is set up properly to act as a hardware encoder/decoder.

Note that if you try to use the individual encode/decode settings, it may not always work; it depends on the type of 'flow id' being used. I've had some H.264 encodes that don't match the flow ID in AppleGVA but seem to work OK with gvaForceAMDKE.
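For reference, the individual flags from above look like this, and you can read the domain back or delete a key to revert. The key names come straight from this post; whether each one actually takes effect depends on your setup:

defaults write com.apple.AppleGVA gvaForceAMDAVCEncode -bool YES
defaults write com.apple.AppleGVA gvaForceAMDAVCDecode -bool YES
defaults write com.apple.AppleGVA gvaForceAMDHEVCDecode -bool YES

# check what's currently set
defaults read com.apple.AppleGVA

# undo a single key
defaults delete com.apple.AppleGVA gvaForceAMDKE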

Thanks again zakklol, don't want to pollute this thread but I now have working iTunes DRM and H264 encode/decode with my RX580...
 
Thank you. I set the dGPU as the primary display adapter in BIOS. The Intel GPU no longer appears in "System Information", but it is still detectable by video encoding apps such as VideoProc, and it also shows up in Activity Monitor > GPU History.
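If you want to confirm from Terminal that the headless iGPU is really active even though System Information hides it, something like this works as a quick sanity check (kext and property names vary a little between macOS versions):

# see whether the Intel graphics/framebuffer kexts actually loaded
kextstat | grep -i AppleIntel

# check which ig-platform-id was injected
ioreg -l | grep -i "ig-platform-id"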
 
Hi everyone,

I'm having a similar problem. I would like to run my hackintosh with the iMacPro1,1 SMBIOS and the iGPU disabled, and enable hardware acceleration on my RX Vega 64. VideoProc shows that HW acceleration is not available.

How can I enable it to use only Vega 64 with HW acceleration?
 
When you say you have H.264 encode and decode with your RX580, what software are you using for H.264 encoding, and how are you verifying that it's the GPU handling the encoding instead of the CPU? I'm curious because I'm trying to determine whether the RX580 has certain macOS abilities my 280X doesn't. So far I haven't been able to configure Clover to force any app except FCPX to throw H.264 encoding to the GPU.
 
Alas, you're right, the only thing mentioning hardware encoding/decoding for H.264 is Videoproc. Please note that the iGPU isn't even activated in the BIOS...
 