
AMD Radeon Performance Enhanced SSDT

Status
Not open for further replies.
Still the same - it's definitely picking up an injection from somewhere.
You have several drives - could there be several Clover EFI partitions?
Is there any chance you're loading Clover from one drive but booting the OS from another?
I see!
My backup SSD (cloned with its own EFI by Carbon Copy Cloner plus a script) is permanently connected.
But I'm booting from my main NVMe system EFI.
That's probably what you see - does it have an impact on my system?


Edit: I also have a third SSD with Windows connected for dual booting.

You're fine, trust me ;).

That's the normal injection from Clover when you check AMD GPU injection plus the framebuffer.
 
Search for a HiDPI script - it will give you more options.
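For reference, most HiDPI scripts ultimately toggle a WindowServer preference so macOS offers scaled resolutions; the key below is the one such scripts commonly write, but treat it as an assumption and verify against whichever script you use:

```shell
# Enable HiDPI (scaled) resolutions system-wide; takes effect after a reboot.
sudo defaults write /Library/Preferences/com.apple.windowserver.plist \
    DisplayResolutionEnabled -bool true
```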
 
Not on an RX580.
There's no more performance to be squeezed out of an RX580 with any mods.
Hi, I'm running an RX 560 - much slower than an RX 580.
BUT when I use those Clover properties (screenshot), I also get about a 30% boost in Metal AND OpenCL Geekbench 5.

That said, as some others reported, I can't see any boost in other OpenCL or Metal benchmarks. Sure, the benchmarks may use different OpenCL code, so some difference in the boost percentage is expected. But for Luxmark (also OpenCL) or Basemark (a Metal game benchmark, https://www.basemark.com/benchmarks/basemark-gpu/), the boost is really near 0% - zero. I don't want to say it has no general effect (some speak about higher VRAM bandwidth), but for me it's unclear whether the boost only happens under very special conditions, or even only in Geekbench.
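Just to make clear what these percentages mean: the "boost" figures in this thread are simple relative score changes between a before and after run. A quick helper (the scores below are illustrative, not anyone's actual results):

```python
def boost_percent(before: float, after: float) -> float:
    """Relative score change in percent between two benchmark runs."""
    return (after - before) / before * 100.0

# Made-up Geekbench 5 scores showing a ~30% boost:
print(round(boost_percent(20000, 26000), 1))
# Made-up LuxMark scores showing essentially no change:
print(round(boost_percent(21500, 21600), 1))
```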
Feel free to use my AMD GPU Menu tool to check whether GPU/VRAM clocking and power management are working. It displays a lot of RX 4xx/5xx properties like GPU/VRAM clock in MHz, temperature, wattage, etc. - plus it detects and shows when AMD HW decode/encode is in use. Normally you can't check that, because even when HW decode/encode is active, the GPU load stays very low. The screenshot shows HW encode in action = Yes, even while VideoProc shows wrong encoder information ;)
(Screenshots: Bildschirmfoto 2020-05-11 um 10.08.05.jpg, Bildschirmfoto 2020-05-11 um 09.47.33.jpg)

PS: My AMD GPU Menu tool doesn't need any sensor kexts! It gets the values directly from the AMD driver. That's also the reason the tool only works with RX 4xx/5xx cards and not on Vega, Radeon VII, etc. - those drivers don't expose these values.
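For those curious how a tool can read these values without sensor kexts: on macOS the AMD Polaris driver publishes a PerformanceStatistics dictionary in the I/O Registry, visible in an `ioreg -l` dump, which is presumably the source such tools read. A sketch that parses one such dump line; the key names and sample values are assumptions based on what the Polaris driver typically reports, not output from this specific tool:

```python
import re

def parse_performance_statistics(ioreg_line: str) -> dict:
    """Parse a PerformanceStatistics dictionary from an `ioreg -l` dump line
    into {key: number}. Key names vary by driver version."""
    stats = {}
    # Entries inside the {...} dictionary look like "Temperature(C)"=62
    for key, value in re.findall(r'"([^"]+)"=(-?\d+)', ioreg_line):
        stats[key] = int(value)
    return stats

# Sample line in the style printed by `ioreg -l -w0` (values are illustrative):
sample = ('"PerformanceStatistics" = {"Temperature(C)"=62,'
          '"Core Clock(MHz)"=1196,"Memory Clock(MHz)"=1750,'
          '"GPU Activity(%)"=13,"Fan Speed(%)"=28}')
stats = parse_performance_statistics(sample)
print(stats["Core Clock(MHz)"], "MHz @", stats["Temperature(C)"], "C")
```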
 

Attachments

  • Bildschirmfoto 2020-04-25 um 10.23.49.jpg
  • AMD GPU Menue V0.85.zip

I agree with you. Benchmarks vary considerably. E.g. with GB4 people are seeing no real change, but with GB5 they see a 30-40% improvement on non-RX580 GPUs. So it depends on the type of workloads each benchmark uses, which in turn depends on how each one uses the GPU's VRAM.
Then, back in the real world with real apps, the changes may not give any app boost at all, even where people were seeing a 30-40% boost in benchmarks.

That's a great tool you developed. I use it all the time to check the Enc/Dec function of the GPU - but it was the first one you posted a long time ago. Thanks for the update. I suggest you start your own topic for the tool in the Graphics section of these forums and post your tool there so that everyone can benefit and you can post updates there ;)
 

It's the other way around: Geekbench 4 gives a higher score and Geekbench 5 gives the same score.

In real apps you get better performance only with Metal and OpenCL apps. I get better results in FCPX - someone else can confirm.
 
That was my point... even benchmarks from the same company differ, because of what they choose to use as workloads. It's the real apps that count.

Which GPU gives you "better" FCPX performance, and how are you measuring the difference? Bruce's test?
 
Yes, I know that,
but what I mean is that when I choose "Default for display" it sends me back to
the 1080p resolution, which is what I don't want.
What I want is for "Default for display" to give me the 2160p resolution, not 1080p.
If you have connected your Hackintosh or your Mac to a 4K TV/monitor, you will see this.

Is there any solution for that?

Thanks

View attachment 468697

This happens because macOS is trying to treat it as a Retina display, so it is rendering at the native 4K but then presenting it as a "looks like" 1080p desktop. If you adjust the level of scaling to "More Space" and look in System Information -> Graphics/Displays, you'll see it is rendering at 8K and then displaying at 4K.
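The arithmetic behind this: in a 2x HiDPI mode, macOS renders a backing store at twice the "looks like" resolution in each dimension, then scales that to the panel's native resolution. A quick illustration:

```python
def backing_resolution(looks_like: tuple) -> tuple:
    """Pixel dimensions macOS renders for a 2x HiDPI 'looks like' mode."""
    w, h = looks_like
    return (2 * w, 2 * h)

# Default on a 4K panel: looks like 1080p, rendered at native 4K.
print(backing_resolution((1920, 1080)))  # (3840, 2160)
# "More Space" at looks-like 4K: rendered at 8K, downscaled to the 4K panel.
print(backing_resolution((3840, 2160)))  # (7680, 4320)
```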

(Screenshot: System Information -> Graphics/Displays)
 
Are you able to see power usage for an RX 5700 XT? I have tried all of the various apps and haven't found one that will show me anything other than GPU usage percentage. I am using VirtualSMC, if that matters.
 