
Gigabyte X299X - Catalina Support

Status
Not open for further replies.
Also, what is the recommended machine ID for the Skylake CPU? The Dortania guide seems to indicate that iMac 17,1 is still the preferred one, but wouldn't a Mac Pro or iMac Pro be closer to the real-world counterpart?
Personally I've always thought Mac Pro 7,1 best fits these boards, and that's what I use. There used to be some annoyances with going with Mac Pro because of the PCIe and memory alerts, but Acidanthera's RestrictEvents.kext sorts that out nicely.

iMac 17,1 definitely doesn't sound ideal - for one thing it has an iGPU, which the i9-10980XE does not.

i9-10980XE is Cascade Lake-X, whereas the real Mac Pro 7,1 is Cascade Lake-W (Xeon W). Not a perfect fit, but closer than iMac 17,1 for sure.

I know many X299 users choose iMac Pro. As far as I'm aware there's not really any practical difference between that and Mac Pro 7,1 - besides the need for RestrictEvents.kext on Mac Pro; both work fine. But the fact that iMac Pro is now discontinued is IMHO another reason to go Mac Pro.
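For what it's worth, wiring RestrictEvents.kext into OpenCore is just one more entry under Kernel -> Add in config.plist. A sketch of what that entry looks like (assuming the usual EFI/OC/Kexts layout - verify the paths against the release you download):

```xml
<dict>
    <key>Arch</key>
    <string>x86_64</string>
    <key>BundlePath</key>
    <string>RestrictEvents.kext</string>
    <key>Comment</key>
    <string>Silence MacPro7,1 PCIe/memory alerts</string>
    <key>Enabled</key>
    <true/>
    <key>ExecutablePath</key>
    <string>Contents/MacOS/RestrictEvents</string>
    <key>MaxKernel</key>
    <string></string>
    <key>MinKernel</key>
    <string></string>
    <key>PlistPath</key>
    <string>Contents/Info.plist</string>
</dict>
```

The kext itself goes in EFI/OC/Kexts alongside the others; note it depends on Lilu, so its entry must come after Lilu's in the Add array.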

Can't comment on the Aorus Master, but I'm reasonably pleased with the i9-10980XE. Seems like a decent CPU, especially if you're in a position to overclock it a bit (which requires good cooling, most likely with water - at least an AIO.) It's not going to compete with AMD Threadripper, but I for one wasn't willing to risk the various software incompatibilities that an AMD Hack has.

One thing you will want to check regarding the Aorus is whether it has the broken NVRAM that plagues many X299 boards. It's not the end of the world, but it is quite an annoyance, and requires a workaround for the Big Sur install (details are in the X299 Big Sur thread.)
 

Oh boy. I've been hackintoshing for almost a decade so I'm no newcomer to weird issues, but my blood pressure starts going up at the mere thought of the workarounds and other tweaks needed because a board is poorly designed or the BIOS is buggy.

Is ASUS a better option? Prices are through the roof at the moment, with most ASUS boards going for north of $700.

P.S.

Another option for me would be the GIGABYTE X299 DESIGNARE EX. I've heard that X299 boards are better than X299X boards, so even though the Designare is less capable overall than the AORUS, I'll gladly take fewer SATA ports and fewer headaches.
 
I've no experience with any X299 board except the X299X Designare 10G, which - as described in recent posts here - has a lot of hardware onboard, but one of the buggiest BIOSes around.

I'd imagine you'd be better off asking these questions in the X299 Big Sur thread, as there are a lot more users following that thread than this one, and between them they have a wide range of original and refresh X299 boards. This thread is 99% about the Designare 10G.
 
Thank you. Yes I did already post there, but there doesn't seem to be as much activity on the topic as on this thread.
 
I thought I'd be happy after fixing my mobo so it boots up and runs normally, with all settings seemingly sorted! But it doesn't feel like a 14-core machine when working in CLO3D! Every task I run makes me wait for it to finish - even worse than my last 4-core hack! Has anyone here seen anything like this? Could it be because my main system drive sits in M2M, sharing bandwidth with the GPU? Is my Cinebench CPU (Multi-Core) result of 17484 / Geekbench 11006 good or bad?
Hope someone can guide me, please!
 
@TheBloke thanks for all the information you've shared about your experience with this board.
I'm not afraid to go for it.

What CPU cooler do you recommend with the i9-10980XE?

Point 2 is that I want it in a 19-inch enclosure so I can put it in a server rack.
So I need to find one that's easy to modify if an AIO is required.
 
To that end I've heard good things about Rosewill, specifically the L4500, which one might even be able to snag off eBay for a good price. It has a good bit of room, and several people have managed to fit 360mm rads into it.
 
I'm not really sure what's happening. Your CPU benchmark results sound reasonable. I'm currently getting 23202 from Cinebench R23, which is 1.33x your result. I have 1.29x your core count (18 vs 14) plus an overclock, though not a huge overclock at the moment - I've turned my overclock down temporarily and also have the max temperature set to 90C, which is thermal throttling me on my AIO.

So your Cinebench score sounds reasonable relative to mine. My current Geekbench 5.4 CPU score is 17456, which is 1.6x your score. I imagine my overclock is making a bigger difference here, as Geekbench doesn't run hard or long enough to hit any thermal throttling. But your score still sounds reasonable for a 14-core CPU without an overclock. (My highest Geekbench score in past testing was 19063, but that included a higher CPU overclock plus a RAM OC with tightened timings.)
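As a rough sanity check, you can normalize both sets of scores per core. A back-of-envelope sketch (the scores are the ones quoted above; 18 cores for my 10980XE, 14 for yours, with clock speeds and overclocks ignored):

```python
# Per-core comparison of the multi-core scores quoted above.
# Clock-speed differences (including my overclock) are ignored, so the
# per-core numbers are only a rough sanity check, not a real benchmark.
scores = {
    "mine (18-core, OC)": {"cinebench_r23": 23202, "geekbench5": 17456},
    "yours (14-core)":    {"cinebench_r23": 17484, "geekbench5": 11006},
}
cores = {"mine (18-core, OC)": 18, "yours (14-core)": 14}

for name, results in scores.items():
    per_core = {test: round(score / cores[name]) for test, score in results.items()}
    print(name, per_core)
```

Per core the Cinebench numbers land within a few percent of each other; the bigger per-core Geekbench gap is what you'd expect from my overclock.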

You have a 48-lane CPU, meaning an NVMe in slot M2M will reduce PCIEX8_1 (slot 2; second down from the RAM slots) to x4 speed. So if you have a GPU in PCIEX8_1 then yes, it might perform badly - it'd be like running it as an eGPU.

But surely you have your GPU in PCIEX16_1 (slot 1) or PCIEX16_2 (slot 3)? There's no resource sharing with the x16 slots, so you can run 2 x full x16 GPUs at once on this board, regardless of NVMe usage.
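The lane budget above can be sketched as a toy model (slot names as on this board; the x4 sharing behaviour is as described above, and this is a simplification that ignores chipset-connected devices):

```python
# Toy model of PCIe lane allocation on a 48-lane CPU (e.g. i9-10980XE).
# PCIEX16_1 and PCIEX16_2 always run at full x16; an NVMe drive in M2M
# takes x4 and drops PCIEX8_1 from x8 to x4, as described above.
CPU_LANES = 48

def slot_widths(m2m_populated: bool) -> dict:
    widths = {"PCIEX16_1": 16, "PCIEX16_2": 16, "PCIEX8_1": 8, "M2M": 0}
    if m2m_populated:
        widths["M2M"] = 4
        widths["PCIEX8_1"] = 4  # halved: shares lanes with M2M
    assert sum(widths.values()) <= CPU_LANES  # budget never oversubscribed
    return widths

print(slot_widths(m2m_populated=True))
print(slot_widths(m2m_populated=False))
```

Either way, a GPU in one of the two x16 slots keeps its full width regardless of what's in M2M.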

However, it seems this CLO3D software doesn't have any GPU acceleration on macOS - it's CPU only? (I Googled it, and their website says it only supports NVIDIA CUDA acceleration.) In which case I have no idea why it would perform badly, given your CPU benchmark results seem fine.

In case there are other apps you're using which do use GPU, what are your Geekbench Compute GPU scores in Metal and OpenCL modes? My Vega 64 gets around 64000 in the Metal benchmark. I'd assume your 5700XT should be a bit higher than that.

Based on what you said I can't see any obvious problems. As long as your GPU is in one of the x16 slots you should get full performance out of it, and your CPU benchmark results sound OK.

Is it just CLO3D that performs badly? What about other software? Maybe there's some weird issue specific to that software.

Do you also have a Windows installation? If so, I'd do some testing in that also, to get a comparison. Install some of the software you use in Windows, and install some Windows monitoring software like HWInfo and check your core temperatures and other CPU/motherboard stats while running some high loads; maybe you're thermal throttling due to insufficient cooling?

You can also check temps in macOS using HWMonitorSMC2 - and also install Intel Power Gadget so that HWMonitorSMC2 can read all values from the CPU.
 
I currently have the Coolermaster Masterliquid ML360R, and that's the only modern cooler I have experience with at the moment. It's a decent AIO which has enabled me to overclock up to 4.6GHz on all cores, albeit with thermal throttling in various situations. I got it because it was #1 in Tom's Hardware's 360mm AIO tests. It's an RGB cooler so it comes with all the RGB junk, but you can just not connect it and it's no issue.

(I was meant to be upgrading to a full custom water loop - the best option for a power-beast like this CPU - but 3 months later I'm still waiting for delivery. Never buy anything from watercool.de!)

The 10980XE's TDP is 165W, however with power consumption limits removed - and still at stock speeds - I believe it will draw more than that. Therefore I'd say that a 360mm AIO is definitely the preferred option if it's possible.
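To put very rough numbers on that - entirely back-of-envelope, using a common rule of thumb of about 100W of sustained heat per 120mm of radiator, and guessing around 280W of package draw with limits removed (both figures are my assumptions, not measurements):

```python
# Back-of-envelope cooling headroom estimate. Both constants below are
# assumptions: ~100 W dissipated per 120 mm of radiator is a common rule
# of thumb, and ~280 W is a guess at 10980XE draw with limits removed.
WATTS_PER_120MM = 100

def radiator_headroom(rad_mm: int, cpu_watts: int) -> int:
    capacity = (rad_mm // 120) * WATTS_PER_120MM
    return capacity - cpu_watts  # positive = headroom, negative = short

print("360mm:", radiator_headroom(360, 280))
print("240mm:", radiator_headroom(240, 280))
```

By that crude estimate a 360mm is workable but without much margin at stock-with-limits-removed, while a 240mm would be relying on throttling or stricter power limits - which matches the gut feeling above.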

So if that rack case that byteminer mentioned can take a 360mm radiator I'd say go for a 360mm AIO. You should then be able to remove the power limits and run at stock speeds with no throttling, and also have a bit of headroom for an overclock if you should wish.

If it proves hard to fit a 360mm then you could consider a 240mm instead. That might still be fine at stock speeds, I'm not sure, but I think it'd definitely not be suitable for any overclocking.

I can't offer any advice on specific coolers as I currently only have experience of the ML360R, so I'd just say check the reviews and comparisons and find something that's highly regarded and will fit in your chosen chassis.
 
OK, an AIO it will be then.
For my gaming rig I have a 9900K overclocked to 4.9GHz all-core and an RTX 2080 Super, all watercooled with an EKWB loop (1x 360 rad and 1x 120 rad).
 