
Debate - NVidia, AMD and Apple

Nothing about my posts should lead to that conclusion. There's no memory loss. Of course they have that control. Their actions in exercising that control have consequences for users. In this case the Mac user base is denied the ability to use the indisputably superior GPU architecture that is available to similar products (i.e. PCs) running alternative systems. It just calls into question the whole notion of the Mac as a serious platform (particularly in media, which is my field) if they are so unable to partner with an industry leader like this and settle instead on an option that is inferior for many customers.

I just don't think Apple has ever fully understood the GPU world since the debacle of the 2013 Mac Pro.

BTW, can you check the facts on the RED situation? RED RAW is not supported by the Afterburner, but they did announce RED RAW decoding in Metal, presumably on the GPU.

What leads me to that conclusion is that you think Nvidia, with its $11B market cap, should have the ability to dictate to Apple, which has a $1.5T market cap, how the drivers should work in its product. It is not Apple's job to accommodate Nvidia; it is the other way around. When you compare the 2080 vs the 5700 XT you get a meager 18% increase in performance overall. For this meager 18% you spend almost double in price. If you want to spend quadruple, then you can get a card that is significantly better. AMD has not tried to compete with Nvidia, but judging by the gains they have made on Intel, if they actually dropped a card that was supposed to compete with the 2080 Ti it would compete. AMD, and to a lesser extent ATI, have always produced the budget-oriented cards that were only marginally behind.

The debacle with the 2013 Mac Pro was not really a GPU issue but rather the fact that they tried to fit a pro-level computer into a cylinder roughly 1' tall and 6" in diameter. They made the same mistake years earlier with the G4 Cube. Understanding GPUs had nothing to do with the 2013 Mac Pro. I have heard from many audio engineers that the trash-can Mac Pro is the best Mac they ever owned.

Don't worry, they will port their codec.

PS: it does not matter to you anyway; since you're sitting here on a Hackintosh forum, you are not going to have an Afterburner card. :)

Let me put it in simpler terms. Ford makes cars; sometimes they move things around on a new model. Ford then tells the company that makes the mount for the alternator that it needs to be made a new way because they moved it. That company, even if it makes mounts for every other car manufacturer in the world, will make the part as Ford requests. This is no different: Apple said, hey, we are going to do things this way, we need you to do it this way. Nvidia said, nah, we are not going to do what you ask. If this was a teenage conversation it would go like this:

Apple: Hey Nvidia, we're changing to the Metal API; we feel it can leverage better power settings on our laptops. Use these parameters for your driver.

Nvidia: Ok we will see what we can do.

Apple: Cool!

2 years later

Apple: Hey Nvidia, you know we changed over to this Metal API and you said you would do something. Customers are saying your drivers are trash; can you fix them?

Nvidia: Nah man, we can't fix those drivers for you... And we do not think that we want to let you write the driver either.

Apple: OK Nvidia, you leave me no choice; we are going to have to deprecate your drivers from our OS and stop certifying your web drivers.

Nvidia: Ok then see ya later
 
@scottkendall

A couple of points to clarify

Firstly I doubt that you and I are in disagreement about very much, just a couple of things.

To amplify my point about the 2013 Mac Pro and relate it to my view of Apple's GPU problems:

Around the 2012-2014 period it became noticeable that a lot of the creative workstations I was working with and around (Flame, Lustre, Resolve & Baselight) were getting GPU upgrades. It was the first time I and many others really became aware of the growing importance of GPU processing. It became pretty clear that at least one GPU upgrade can be justified during the useful life of a pro workstation. At exactly the moment of this realisation, Apple released a "workstation" class computer with a proprietary GPU module which was never upgraded. A real misread of customer needs. I've recommended the 2013 Mac Pro for production use and fully understand that it's fine for many cases. It's still a very faulty product from people who should have known better.

I need to pick apart a couple of your points about the relative GPU merits:
AMD has not tried to compete with Nvidia, but judging by the gains they have made on Intel, if they actually dropped a card that was supposed to compete with the 2080 Ti it would compete.

A hypothetical product is not a supporting argument. That should go without saying.

When you compare the 2080 vs the 5700 XT you get a meager 18% increase in performance overall.

The 2080 was launched 10 months before the 5700 XT, which illustrates the extent to which AMD lags in the development timeline. Also, 18% could be pretty profound depending on the need. It could be the difference between achieving locked 24fps playback and coming way short at around 20.3fps.
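As a rough sanity check (just arithmetic on the percentages quoted above, not a new benchmark), an 18% deficit against a locked 24fps target is exactly what lands you at roughly 20.3fps:

```python
# Back-of-the-envelope check on the 18% figure quoted above.
target_fps = 24.0    # locked playback target
gap = 0.18           # the ~18% overall advantage quoted for the 2080 over the 5700 XT

effective_fps = target_fps / (1 + gap)
print(f"{effective_fps:.1f} fps")  # -> 20.3 fps, i.e. well short of locked 24fps playback
```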

What leads me to that conclusion is that you think Nvidia, with its $11B market cap, should have the ability to dictate to Apple, which has a $1.5T market cap, how the drivers should work in its product.
Let me put it in simpler terms. Ford makes cars; sometimes they move things around on a new model.

I have a really different idea of how this relationship should work:

Market cap should really have nothing to do with what is essentially an issue of customers' needs. The Ford analogy is not the one I would use. A GPU is far more integral to the Mac's performance than an alternator is to a car's. A workstation is less like a car and more like a road bike. If Bianchi, for example, somehow contrived to make their frames incompatible with Shimano groupsets (for the sake of the debate let's assume it is possible to do that), then a proportion of customers would seriously consider a different brand of frame. Obviously it would be madness for Bianchi to cut off such a high-profile supplier of a key component. Yet this is precisely what Apple has done, unilaterally, with negative consequences for the customer, while getting a pass from a compliant trade press that's been gaslighted into accepting virtually anything that Apple does. The Mac media should be hammering Apple on this issue. What about the eGPU users that would really benefit from a Pascal or Turing GPU in their workflow?

I'm really perplexed at this because, to accommodate an Apple system upgrade, I had to remove a perfectly good product from my system and replace it with something inferior for reasons that have nothing to do with my needs.
 
I agree completely that the drivers under HS were pretty crappy. Nevertheless, in my particular case, when I swapped out my 1080ti for a Vega 56 in order to move to Mojave it was on balance a downgrade. Resolve performance under CUDA was superior to that under Metal and considerably superior to that under OpenCL. In addition, Neat Video 4 became about half as fast because the GPU no longer provided any benefit. At the time I did the change, the Vegas were the fastest GPUs sold by Apple while the 1080ti had already been superseded by the 2080ti.
Why have you chosen the Vega 56 then? I have the Radeon Vega 64 Liquid Cooled. It's faster than an RTX 2080 in the Luxmark OpenCL benchmark.
In what way is OpenCL inferior?
 

My 2013 Mac Pro has no issues and was never faulty; it is still in use, albeit not as my daily driver, in the music studio. An alternator is as important to a car as a GPU is to a computer: if you do not have an alternator your car does not run. But that was not the point. The point was that a driver is very easy to write when that is what you do and you're told exactly how they want it.

Market cap shows that Apple is on a level that Nvidia is nowhere near; Apple is the market, not Nvidia. However, most customers are fine with the AMD GPUs and the performance they are getting. The only people I see complaining about it in my purview are a few Hackintosh users. But honestly, Nvidia just has to correct their driver and submit it; once they have done that they can come back into the bubble, and until then they should keep their cooties out. Like I already said, AMD are not competing because they are not trying to compete in that space. AMD GPUs are for a target group, and that target does not buy super top-end GPUs for $1300. It's not technically inferior if it was never meant to compete with the 2080, and it does compete well against the cards that it was designed to compete with.

Don't worry, soon enough you will not be able to do anything but use Apple hardware or get out, so maybe you should start considering that option.

Lastly, a hypothetical product is actually a perfect argument, because that's how all great things start: with a hypothesis.
 
Why have you chosen the Vega 56 then?
I wanted a configuration that actually matched a shipping iMac Pro (for no good reason), so the choice came down to a Vega 56 or Vega 64 FE, but I don't really need 16GB of VRAM, which in any case would require me to add more system RAM due to Autodesk Smoke requirements. I actually have a Vega 64 in my other Hack so I was already well aware of its performance potential. There's not a huge difference between a 56 and a 64 in daily use, TBH.

Regarding Luxmark, my Vega 56 also outperforms the 1080ti, but in the actual tools that I use the situation is reversed. Specifically, in Resolve 15 on 10.13.6 the 1080ti (CUDA) outperformed the Vega 56 (OpenCL) by more than 60%. My methodology was to count how many noise nodes can be enabled in the Resolve "Candle" benchmark while maintaining locked 24fps replay. The numbers are 39 for 1080ti/CUDA, 23 for Vega 56/OpenCL and 24 for Vega 56/Metal. These numbers were no surprise to me since I'd already had years of observation under my belt regarding various GPU configs using Resolve. After upgrading to 10.14.6 the Vega 56 numbers were slightly lower, interestingly, at 21 (OpenCL) and 23 (Metal). In this case the Luxmark numbers are merely a curiosity and irrelevant. The situation changes quite a lot when switching to Resolve 16 (my main reason for wanting to use Mojave). In this case, under Metal, 35 noise nodes can be enabled for locked replay, which is a testament to the promise of Metal.
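To show where "more than 60%" comes from, here's the node-count arithmetic on the Candle figures above (just my own numbers restated, relative to the Resolve 15 OpenCL result):

```python
# Relative throughput implied by the Candle noise-node counts reported above.
candle_nodes = {
    "1080ti / CUDA    (Resolve 15, 10.13.6)": 39,
    "Vega 56 / OpenCL (Resolve 15, 10.13.6)": 23,
    "Vega 56 / Metal  (Resolve 15, 10.13.6)": 24,
    "Vega 56 / OpenCL (Resolve 15, 10.14.6)": 21,
    "Vega 56 / Metal  (Resolve 15, 10.14.6)": 23,
    "Vega 56 / Metal  (Resolve 16, 10.14.6)": 35,
}
baseline = candle_nodes["Vega 56 / OpenCL (Resolve 15, 10.13.6)"]
for config, nodes in candle_nodes.items():
    print(f"{config}: {nodes} nodes ({nodes / baseline:.0%} of the Vega 56 OpenCL baseline)")
# 39 vs 23 is roughly 1.7x, hence "more than 60%" for the 1080ti under CUDA.
```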

My 2013 Mac Pro has no issues and was never faulty ...
I probably should have used "flawed" instead of "faulty". The actual reliability of the machine seems pretty solid.

The only people I see complaining about it in my purview are a few Hackintosh users.
Exactly, people like me for whom the ability to use what we regard as a "proper" GPU was a key driver to implementing Hackintoshes in production environments. This is a constituency that's accustomed to seeing 2, 3 & 4 Quadros attached to dual-Xeon Linux workstations, as well as the 4,1 and 5,1 Mac Pros.
 

Doesn't the Ti have 11 GB of memory? So you took a hit on memory when you didn't have to, used the lesser of the two cards, and then complained. Your configuration will never be the same as an iMac Pro: you do not have a T2 chip, first of all, and second of all they are using a Pro card, not an RX card. Flawed, yes; they were locked in with the specs they had when released and not upgradeable, just like the Cube. For people who upgrade the full system every 3 years they were awesome, until Apple did not make a replacement. The new Mac Pro can hold 2, 3 & 4... If you were to use two of their proprietary modules you could get 6 GPUs: 4 from the modules and two standard off-the-shelf Vega VIIs. Also let me ask you this... Are you using the Radeon Boost file along with a fan speed fix? The boost file turns something back on that Apple turned off for RX cards because they could not fix the fan issue, and the fan speed fix addresses the dirty hack Apple did that reduced the power of the card.
 
Doesn't the Ti have 11 GB of memory? So you took a hit on memory when you didn't have to, used the lesser of the two cards, and then complained.
Just a reminder: I didn't set out to replace the 1080ti, which I was happy with. I needed to go to Mojave and the GPU downgrade was collateral damage. If you mean I could have used a 64, then true, but I know that the 64 would still be inferior to the 1080ti. I also can't use the Vega 64 that I have because it's too wide and I can't afford to lose the extra PCIe slot. Given that the Vega 64 FE is very difficult to come by, I really did have to take the hit on the VRAM. Despite everything I'm not unhappy with the Vega 56. The loss of viable GPU acceleration in Neat Video is the main area of sadness.

Don't misunderstand me here, I'm not howling at the moon and shaking my fist at Apple ("You bastards, you've ruined my system!"); it's not like that at all. Actually there's little noticeable difference in daily use since the OS/GPU switch, which goes to prove that it's easy to get a bit overly obsessive about numbers. In addition, the general feel of the system around the Finder and in Safari is noticeably smoother since the switch.

Regarding the VRAM, Smoke wants to see 3X the VRAM in system RAM and will limit its VRAM pool accordingly. I have 32GB installed, so I never actually got at the full 11GB in Smoke; I think it might have even been less than 10GB. As always, you have enough RAM or VRAM until you don't, and I seem to have enough at 8GB. The word from an Autodesk support tech was "8 gig is fine". I did try to add 32GB to the system last year but macOS didn't see it all, whereas Windows 10 did.
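In case the 3X rule isn't clear, this is roughly how the numbers fall out; a sketch of the sizing rule as I understand it (Smoke's actual behaviour may differ in detail):

```python
# Sketch of the "Smoke wants 3x the VRAM in system RAM" rule described above.
# Assumption: the usable VRAM pool is capped at system_ram / 3, never above the card's VRAM.
def usable_vram_gb(card_vram_gb: float, system_ram_gb: float) -> float:
    return min(card_vram_gb, system_ram_gb / 3)

print(f"{usable_vram_gb(11, 32):.1f} GB")  # 1080ti with 32GB RAM -> 10.7 GB, so never the full 11GB
print(f"{usable_vram_gb(8, 32):.1f} GB")   # Vega 56 with 32GB RAM -> 8.0 GB, the whole card
print(f"{usable_vram_gb(16, 48):.1f} GB")  # a 16GB FE needs ~48GB of system RAM to be fully used
```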

Are you using the Radeon Boost file along with a fan speed fix?

I'm using the SSDT supplied by KGP, which is customised for the card I'm using, the MSI Airboost. I think it achieves the same thing as Radeon Boost. I found that the native Mojave driver fixed the fan issues anyway, but the SSDT overclocks the card.

... they are using a Pro card, not an RX card.

Do you happen to know what the actual difference is between RX and Pro? I'd love to know. It's the same silicon but a different device ID.
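If anyone wants to see what their own card reports, macOS's stock system_profiler shows the PCI device ID for the graphics card; a quick way to dump it (the 0x687f value in the comment is what I believe the RX Vega 56/64 reports, so treat it as an example rather than gospel):

```python
import subprocess

# Dump the graphics section of System Information and pull out the identifying lines.
out = subprocess.run(
    ["system_profiler", "SPDisplaysDataType"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.splitlines():
    if "Device ID" in line or "Chipset Model" in line:
        print(line.strip())  # e.g. "Device ID: 0x687f" on an RX Vega 56/64 (example value)
```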
 

Yes, you were collateral damage because of your choice of video card, but Nvidia could have played the game; honestly, there was no reason not to. What should make you more mad is that Nvidia did not want to develop the driver in the way they were asked and were told would function properly. The Vega 64 FE was really easy to come by when Mojave released; now you might have to get a WX 9100, which is more or less the same card and readily available today if you live in the US. The word from Autodesk is that more memory is always better, period. I have been using Autodesk products for 20 years and more memory is always better. As for macOS not seeing your memory, you probably needed to map it in your bootloader. Radeon Boost is a kext, not an SSDT; however, the SSDT that fixes the fan has a small overclock built in. You could probably use your SSDT with the kext; if you notice the fan does not calm back down after working, then you should try the SSDT from the link below. For straight testing purposes you could remove the SSDT and use only the kext to see if it gives you a boost.

You can download the kext for the boost (make sure you grab the one that is Vega-enabled) and the SSDT here (make sure you grab the SSDT for the Vega 56 in slot 1). You will also need to disable WEG (WhateverGreen), I think.



The kext turns FalconSMUFW back on, and the SSDT fixes the fans and does some renaming as well. Apple's solution for fixing the RX Vega fan issues was to disable FalconSMUFW, causing roughly a 30% decrease on RX Vega and much, much more on the VII, at least from a Geekbench standpoint. The dirty hack was, I believe, released with Mojave, so someone on High Sierra with a Vega had better performance than with Mojave. Honestly I never found a difference, but I am not video encoding, only doing still renderings that are pulled from Twinmotion.
 
Thanks for all that. I'll give it a spin when I get a chance.
 
Are you using the Radeon Boost file along with a fan speed fix? The boost file turns something back on that Apple turned off for RX cards because they could not fix the fan issue, and the fan speed fix addresses the dirty hack Apple did that reduced the power of the card.
I am using a custom SoftPowerPlayTable for my Radeon Vega 64 with a reduced GPU clock, undervolting, and an increased RAM clock. That has increased the OpenCL benchmark values significantly.
 