
X299 Big Sur Support

Status
Not open for further replies.
Thank you very much!
 
... Publish your 6900XT results whenever you can. Greetings!!
Hey WC, just replying to give you two results. The first one is with the new 6900XT, giving me a LuxMark score of 57012:

[Attachment: 6900XT LuxMark 57012.jpeg]


Now on my other system running 2 x Radeon VIIs, it's giving a LuxMark score of 100859:

[Attachment: Dual Radeon VII LuxMark 100859.jpeg]


I tried to have it test just one card, but it wouldn't let me, so I just divided the dual-card score in half: 50,430. So in the end, if I had 2 x 6900XTs running, it would be much faster. I know Izo mentioned that it would be overkill for my system. I use Photoshop, Premiere, and a couple of other apps, but from what he said, Adobe (I guess) doesn't take advantage of dual GPUs. If I'm incorrect on this, please let me know.
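The back-of-envelope math behind that halving estimate, assuming LuxMark scales perfectly linearly across two cards (an optimistic assumption):

```python
# Rough single- and dual-card estimates from the measured scores above,
# assuming perfect 2x scaling in LuxMark (real scaling is usually worse).
dual_radeon_vii = 100859      # measured: 2 x Radeon VII
single_6900xt = 57012         # measured: 1 x 6900XT

single_vii_est = dual_radeon_vii / 2    # ~50430 per Radeon VII
dual_6900xt_est = single_6900xt * 2     # ~114024 if a pair scaled perfectly

print(f"Estimated single Radeon VII: {single_vii_est:.0f}")
print(f"Estimated dual 6900XT:       {dual_6900xt_est}")
```

By that estimate, a single 6900XT already edges out a single Radeon VII, and a hypothetical pair would top the dual-VII rig.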

I just like to have as much power as possible; that's just me. And knowing that I can sell these VIIs for $2,200 and get 6900XTs for $1,700, that's a $500 difference I can pocket. But if having that extra card is pointless, and if anyone can provide some info on how negligible that 2nd card would be in my system, then I won't get it. Looking forward to the replies. Thanks.

Aside from that, I'm lovin' this new card and can't wait to get the other one for my main system.
 
It's much better!
The Radeon VII was clearly inferior even though it was AMD's flagship card.
This generation of video cards is a true competitor to Nvidia and has the power!

On my Radeon VII I would get 60,000, maybe 70,000 points (Geekbench).
With the AMD BOOST kext (unstable!) I got 100,000 points.

With the 6900 XT I get 165,000 points (without any extra kext).

So the performance is there!
Hey ICEV, you were right! I just ran a Geekbench Metal test, and with the Radeon VII I got 67310:

[Attachment: Radeon VII GeekBench Metal Score 67310.jpeg]

and with the 6900XT I got 159847:

[Attachment: 6900XT GeekBench Metal Score 159847.jpeg]

This is AMAZING! More than TWICE the performance! Thanks a lot for turning me on to this card!
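For the record, the exact speed-up implied by those two scores:

```python
# Geekbench Metal scores from the two runs above.
radeon_vii = 67310
rx_6900xt = 159847

speedup = rx_6900xt / radeon_vii
print(f"6900XT is {speedup:.2f}x the Radeon VII")  # about 2.37x
```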
 

LuxMark is a synthetic benchmark; Geekbench is closer to real-world use.

Also, OpenCL is deprecated in macOS, replaced by Apple's native graphics layer, Metal. So in any synthetic benchmark that offers both options, you will see OpenCL scores come in lower than Metal.

The RDNA2 6900XT is leaps and bounds faster than the last GCN card (the Radeon VII), except for some specific compute tasks that benefit from the Radeon VII's high-bandwidth HBM2 VRAM (1TB/s).

Not only is the 6900XT a better GPU, it is also more power efficient.

I am still curious what you are using these GPUs for; can you elaborate a little on what you do with them?

Enjoy the new card
 
I also ran two Radeon VIIs. I like to do performance tests and have maximum power in my system, but so many tests have led me to a reality nobody likes: benchmark results have little to do with how applications behave in the real world. I have run tests with Premiere, Photoshop, and After Effects (PugetBench), and the results are surprising. I can only say that a 4GB Sapphire Nitro+ 580 would surprise you. This is because (on macOS) not all applications are optimized for the latest models from the manufacturers, so sometimes a card like the 580 can rival far superior ones. I'll leave you some tables I created before buying the 6900XT.

As for dual-GPU setups, DaVinci is the program that best takes advantage of multiple graphics cards, but of course only for GPU-bound processes. The latest versions of Trapcode have updated some plugins so you can choose between CPU and GPU processing. The 3D world is different: at least in Cinema 4D, I understand the ProRender engine supports all the GPUs and makes very good use of them. I just hope Apple optimizes Big Sur so it recognizes and takes advantage of more than one GPU, although we have been waiting for that for a long time.
 

Attachments

  • Comparativa.xlsx.zip (20.7 KB)
Hey guys! To keep my system fresh for as long as possible (I could decide to skip the M1X Mac mini if it turns out disappointing, for example, or just wait for 2nd-gen Apple Silicon Macs) and to keep its value as high as possible (I'll sell it when I finally switch to an ARM Mac), I'm weighing up the current options.

Please keep in mind that I'm just a hardware enthusiast (who isn't in this thread? ;-) ), but I don't really need much multicore performance, given that I use my system basically for web browsing, email, an office suite, super-lightweight music production (simply guitar recording over a backing track in GarageBand), and occasional gaming in Windows. That's why I'm considering the slightly higher single-core clocked 10th-gen Cascade Lake-X (over my Skylake-X) and the RX 6800 XT (over my 5700 XT). Higher single-core clocks should benefit my typical usage more than an 18-core multicore workhorse, I suppose!?

At the moment I should be able to sell my 9980XE for 700€ and the 5700XT for 750€, and replace them with:

9980XE 700€ --> 10980XE 910€ | 210€ loss (I said I don't care multicore, but I also said I'm an enthusiast :p )

9980XE 700€ --> 10920X 570€ | 130€ gain

5700XT 750€ --> 6800XT 950€ | 200€ loss (A guy living in my town bought this on AMD.com)
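Netting out the two upgrade paths from the figures above (a sketch of the arithmetic only, prices in €):

```python
# Sale and purchase prices from the list above (EUR).
sell = {"9980XE": 700, "5700XT": 750}
buy = {"10980XE": 910, "10920X": 570, "6800XT": 950}

gpu_swap = sell["5700XT"] - buy["6800XT"]                    # -200 (a loss)
plan_18core = (sell["9980XE"] - buy["10980XE"]) + gpu_swap   # -410 total
plan_12core = (sell["9980XE"] - buy["10920X"]) + gpu_swap    #  -70 total

print(f"10980XE + 6800XT route: {plan_18core}€")
print(f"10920X  + 6800XT route: {plan_12core}€")
```

So the 12-core route costs about 70€ out of pocket overall, the 18-core route about 410€.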

So, considering the amount I reported and obviously the hassle to buy/sell/replace parts (no water loop here):

Should I swap 9th gen for 10th gen? If so, 12 or 18 cores? Will they both hold their value? Any further considerations?

Should I swap the 5700XT for the 6800XT? Are the RDNA2 drivers already as optimized as RDNA1's?

Thanks to whoever decides to give me their opinion; please report any issues you've had with those parts in your experience.
 

I use all the Adobe software, probably more than anyone here, and I can say that dual Radeon VIIs or even Dual 6900XTs are overkill.

The only app that really leans on Metal is Premiere, and it won't support dual GPUs properly anyway. Even if it did, you'd gain nothing.

Photoshop does use Metal, but I can guarantee you that my workload is heavier than yours and I have 0 issues even with a single Radeon VII.

You are better off getting more RAM (64GB or more) than dual GPUs.
 

Do not get the 10980XE. The 9980XE is a better CPU.

The 10980XE is actually a downgrade: Intel added more mitigations into the silicon itself, and it's slower than the 9980XE. I tried it myself and ended up putting my 9980XE back.

If you are a hardware enthusiast, I suggest getting a water loop and overclocking the 9980XE to 4.5GHz (or so) on all cores; that will give you per-core performance similar to the higher-clocked, lower-core-count Skylake-X CPUs. Or maybe get the 16- or 12-core and overclock it to 5GHz per core? Possible... I guess. But you will still need a water loop.
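A rough way to see the trade-off, assuming (naively) that "snappiness" tracks per-core clock and batch throughput tracks cores x clock; the overclock targets here are illustrative, not guaranteed results:

```python
# Naive comparison: per-core clock vs. aggregate core-GHz.
# Stock figure uses the 9980XE's 3.0GHz base clock; OC targets are hypothetical.
configs = {
    "9980XE stock base (18c @ 3.0)": (18, 3.0),
    "9980XE OC        (18c @ 4.5)": (18, 4.5),
    "12-core OC       (12c @ 5.0)": (12, 5.0),
}

for name, (cores, ghz) in configs.items():
    print(f"{name}: per-core {ghz} GHz, aggregate {cores * ghz:.1f} core-GHz")
```

By this crude measure, the overclocked 18-core keeps the highest aggregate throughput while closing most of the per-core gap, which is the "snappiness" argument.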

That "snappiness" is really important to me; that's why I overclock.

For the 6900XT/6800XT, you can just swap it in.

Just make sure you are on Big Sur 11.4, that you are on the MacPro7,1 SMBIOS, and that you disable WhateverGreen before putting in the RDNA2 card.

X299 is a dead-end platform; it's not worth upgrading the CPUs unless you're going from an 8-core to an 18-core.
 
I don't wanna go through the hassle of water cooling. Wouldn't it be easier to swap to the 10920X? It has higher clocks than the 9980XE by default, is easily cooled at stock by my Noctua D15, and I would gain 130€, which would cover most of the loss from swapping the 5700XT for the 6800XT.
 

You can get a higher end AIO if you don't want to do a proper waterloop.

I don't understand why you wouldn't want to overclock these processors; they were built for that. The stock speeds are pretty tame, and imo Intel just kept them there to appease their website specs and stay under a certain wattage.

Just get a good 360mm AIO and overclock your 9980XE. I'm not really sure why you'd want to downgrade to a 10920X when you have a binned chip on your hands that can overclock easily with a decent AIO.

You can definitely do 4.0-4.2GHz on all cores with a good AIO, or, if you want to replicate the 10920X's core count, just overclock 12 cores to 3.5GHz and leave the other 6 at default.

Remember that the advertised boost clock usually applies to only 1-2 cores, not all of them. They advertise Turbo Boost as if it were all cores.

The whole point of a water-cooling setup is to boost ALL the cores and run above the base frequency whenever needed. This is where the "snappiness" comes in: the CPU can jump to, say, 4.5GHz on all cores when necessary.
 