
Z690 Chipset Motherboards and Alder Lake CPU

Having reached the 5th stage of grief -- acceptance -- my views on this are simple:
  • Let's assume E-cores never existed in Alder Lake.
  • We still get:
    • 20% uplift in performance per clock (IPC)
    • Ability to sustain all-core high frequencies indefinitely
    • 16 PCIe 5.0 lanes from CPU
    • 4 PCIe 4.0 lanes from CPU
    • Up to 12 PCIe 4.0 lanes from Chipset
    • Up to 16 PCIe 3.0 lanes from Chipset
    • DDR4 and DDR5 memory
    • USB 3.2 Gen 2x2 built into chipset
    • x8 DMI connection between CPU and Chipset (versus x4 in previous generations)
  • This translates into:
    • 4 or more NVMe SSD slots
    • 6 SATA ports without disabling M.2 slots
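To put the x8 DMI point in perspective, here is a rough per-direction bandwidth budget. The effective per-lane PCIe rates are standard figures; the three-M.2 configuration is just an illustrative worst case, not a claim about any specific board:

```python
# Effective per-lane throughput in GB/s (after 128b/130b encoding):
GEN3, GEN4 = 0.985, 1.969

dmi = 8 * GEN4            # x8 DMI 4.0 link between CPU and Z690 chipset
three_m2 = 3 * 4 * GEN4   # e.g. three x4 Gen4 M.2 slots hung off the chipset

print(f"DMI 4.0 x8    : {dmi:.2f} GB/s")       # ~15.75 GB/s
print(f"3x Gen4 x4 M.2: {three_m2:.2f} GB/s")  # ~23.63 GB/s
```

So a fully loaded chipset can still oversubscribe the link, but with twice the headroom of the old x4 DMI.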
Now let's look at official pricing:
Comparison (forget about E-cores):
  • 6-core / 12-thread parts:
    • i5-11600K was $262
    • i5-12600K is $289
    • Difference = 10% more
  • 8-core / 16-thread mid series:
    • i7-11700K was $399
    • i7-12700K is $409
    • Difference = 2.5% more
  • 8-core / 16-thread high end:
    • i9-11900K was $539
    • i9-12900K is $589
    • Difference = 9.3% more
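Recomputing those gaps from the list prices quoted above (the i5 difference rounds to 10.3%, close to the stated 10%):

```python
# Official launch prices quoted above: (11th-gen part, 12th-gen part).
prices = {
    "i5 (11600K -> 12600K)": (262, 289),
    "i7 (11700K -> 12700K)": (399, 409),
    "i9 (11900K -> 12900K)": (539, 589),
}
for tier, (old, new) in prices.items():
    print(f"{tier}: +{(new / old - 1) * 100:.1f}%")
# -> +10.3%, +2.5%, +9.3%
```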
Motherboard prices are generally higher, but DDR4-based boards are relatively inexpensive; the Gigabyte Z690 Aero G is only US$289.95.

So if we disable E-cores for Hackintosh, is that really such a bad thing?

We should remember that with Alder Lake we don't just get an upgraded processor, we also get a vastly upgraded chipset.
Completely agree with you! Alder Lake processors certainly punch far above their weight in terms of price for performance.

  • The 12600K not only beats the 5600X but also the 5800X (once a $450.00 processor at Micro Center and elsewhere, if you could find it in stock).
  • The 12700K trades blows with the 5900X (which is still a $520.00+ processor) and wins in gaming. The 12700K is currently $399 at Micro Center, with a $20.00 discount if you also buy a Z690 motherboard at the time of purchase.
  • And the 12900K does the same to the mighty (and still expensive) 5950X.

Alder Lake is impressive.

And you’re right that it runs macOS just fine without any major headaches; just some elbow grease is needed to coax macOS to run on it. And even though macOS wasn’t designed to use P + E cores on the x86 side (it does have functionality to handle hybrid ARM cores), macOS still benefits from Golden Cove’s single-threaded and multithreaded performance uplifts.

Speaking of Golden Cove, I’m hearing that Sapphire Rapids Xeons are coming in 2022 and may have up to 56 Golden Cove cores, and no Gracemont. A mighty beast in its own right.
 
Also remember that at the moment Alder Lake is running handicapped in macOS, as we’re not taking full advantage of the performance made available by the P + E cores. And yet, Alder Lake is still a beast.

No it's not really a beast.

Here is typical score for 9th generation 9980XE 18 core:

i9-9980XE: 1124 / 15380


Notice that the Alder Lake 16-core multicore score is not gaining. When the Alder Lake E-cores come up, it will be benching around 16,000.

Here is typical score for top Alder Lake firing on all cylinders:

i9-12900: 1900 / 17500


I wrote in previous post that Alder Lake is a mixed bag.

Single-core performance has improved markedly over 3 generations:

• 9th gen: 1100 (today's Mac Pro Xeon)

• 10th gen: 1300 (iMac 20,2)

• 11th gen: 1700 (Rocket Lake and iPhone)

• 12th gen: 1900 (end of days, so many challenges)

(I'm intentionally mixing and matching traits a little to amplify my point)

iPhone 13: 1650 / 4400


A phone shows a 45% gain over today's Mac Pro, depending on how you come at it.

Mac Pro 28 core blah blah


And this state-of-the-hack report doesn't factor in very important missing features like iGPU video decode (VDA), which was lost with 11th gen, with corresponding credit to Apple's Bionic, Media, and crypto engines on Apple Silicon.

The hack value proposition was in 2 dimensions: 1) a lower price for a given performance, and 2) much better overall multicore via gamer overclocking.

Those days are over. Apple's laptops now take on Intel desktops and Apple's own Intel Pros in terms of total system performance.

With Apple Silicon it's become an apples-and-oranges comparison (haha) with IA. There's no longer any clear context for comparison. Apple can brute-force its way into the same arena as Intel, and then exceed it for its own purposes by tailoring the whole stack.

And more of today's and future Mac performance is coming from what a Rocket Lake++ hack has to leave behind and Intel is not even clearly advancing: VDA and AVX enhancements. To the extent Intel improves these old-timey features, they will never again be optimized for the Mac or its apps.

So I am unable to take heart in the observation that if the Alder Lake E-cores come up there will be some kind of parity... The hack will not be back, as things stand, in the future.

But there are a few more years left to enjoy.

As to how great and upgradable PCs are, this seems to be where a mild split personality appears on the forum... If it's just about PC nerds claiming bragging rights over booting macOS, I get lost. Windoze forever!

It could be well past time to re-imagine what a PC should be, and I suspect it has to be more about changes in thinking about SW than HW.
 

You are not wrong on any of the points you’ve made. However, you forgot to include the fact that people might have motivations other than yours.

#1 rule of the internet should really be: “Let people enjoy things”

Lots of people here enjoying things for various reasons, no need to imply that they do things “for bragging rights”. Celebrate that there’s a vibrant community having fun instead?
 
@StefanAM You've made an extraordinary find! I suppose vit9696 and the Acidanthera team found it was only necessary to provide a processor stub, as in their new SSDT-PLUG-ALT, without any connection to the actual processor device. And then the quirk ProvideCurrentCpuInfo sufficiently disables the topology check to allow leaving all cores on.

This, however, introduces a minor complication: to leave all cores and hyperthreading on in BIOS for other OSes, one then needs a specific SSDT depending on the CPU model; there is no longer a universal wrapping SSDT.
From the results by @sjoseph and @CaseySJ, only the number of P-cores matters, so there would be only two SSDTs at this point: 8 P-cores for the i9-12900K or i7-12700K; 6 P-cores for the i5-12600K.

I have streamlined the code to make future adaptations even easier, and compiled two SSDTs:
SSDT-PLUG-ALT-8P.aml for i9-12900K or i7-12700K
SSDT-PLUG-ALT-6P.aml for i5-12600K
These replace both SSDT-CPUR-Z690.aml and SSDT-PLUG.aml for those who use the EFIs from this thread, or replace SSDT-PLUG-ALT.aml for those who have switched to Acidanthera's SSDT, or use the Z690 Aero G EFI.
Quirk ProvideCurrentCpuInfo must be enabled, and all P and E cores as well as hyperthreading must be enabled in BIOS.
If you can confirm that these SSDTs work, I'll update the "standard" EFI.

The simplification is to put a single _OSI("Darwin") test around the entire block. The base unit is then simply

where the three elements in bold have to match. CP00 has the additional _DSM method to add power management.
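For reference, here is a minimal sketch of what one such base unit could look like, reconstructed from the description in this thread; the table header fields, processor IDs, and register block values are illustrative assumptions, not copied from the actual SSDT-PLUG-ALT.aml:

```asl
DefinitionBlock ("", "SSDT", 2, "ACDT", "CpuPlugA", 0x00000000)
{
    Scope (\_SB)
    {
        If (_OSI ("Darwin"))    // single Darwin test wrapping the whole block
        {
            // One stub per logical processor, with no link to the real CPU device.
            Processor (CP00, 0x00, 0x00000510, 0x06)
            {
                // Only CP00 carries _DSM, which publishes plugin-type = 1
                // so that XCPM power management attaches.
                Method (_DSM, 4, NotSerialized)
                {
                    If ((Arg2 == Zero)) { Return (Buffer (One) { 0x03 }) }
                    Return (Package (0x02) { "plugin-type", One })
                }
            }
            Processor (CP01, 0x01, 0x00000510, 0x06) { }
            Processor (CP02, 0x02, 0x00000510, 0x06) { }
            // ... continue through the last enabled logical core
        }
    }
}
```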
@etorix,

Just wanted you to know your SSDT-PLUG-ALT-8P.aml didn't work for my i9.
I had to use this one instead (which I have adjusted) for 12900K use.

Performance is impressive to say the least (with all cores plus HT on). Not even Windows 11 could reach this score!

 

Attachments

  • SSDT-PLUG-ALT.aml (845 bytes)

Alder Lake is impressive.
The results we’ve been posting in this thread corroborate this statement. Alder Lake (with Z690) is quite impressive.
 
Geekbench single/multicore is a CPU benchmark. It's not affected by GPU, nor I/O, e.g. not affected by graphics card or hard drive.

Geekbench completely separates GPU performance into its own benchmark, which they sometimes confusingly refer to as "Compute". The GPU is not involved at all in the single/multi-core scores.
It's affected by RAM, but not very much, because everything compute-wise is designed to work from tiers of caches, of which there are typically 2 or 3 for the CPU: a very small per-core 1st-level cache for instructions, a much larger per-core 2nd level for data, and a very much larger 3rd level, shared by all cores, for data. The key to caches is that they hide latency, the time required to set up memory access as RAM usage patterns flow.

Overall, RAM speed is more about ensuring that RAM has enough throughput to handle simultaneous compute across all cores and I/O. The whole system might be thought of like traffic-signal design in a city, where you want to make good use of the roads and avoid gridlock. Like CPU clock-speed limits, most parts of today's computers are gated by the fundamentals of physics, so system tuning counts for a lot as making things faster gets trickier with each stepwise advance.
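The cache-tier point can be demonstrated with a toy pointer-chasing sweep over growing working sets. This is a generic illustration, not anything Geekbench does; in CPython the interpreter overhead damps the effect compared with C, but the upward trend across cache tiers is usually still visible, and absolute numbers depend entirely on the machine:

```python
import random
import time

def traversal_time(n_bytes, reps=3):
    """Seconds per dependent load when chasing a random cycle whose index
    array covers roughly n_bytes (~8 bytes per slot; very rough in CPython)."""
    n = max(2, n_bytes // 8)
    order = list(range(n))
    random.shuffle(order)
    # Build a single random cycle so each load depends on the previous one,
    # defeating the hardware prefetcher.
    nxt = [0] * n
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    best = float("inf")
    for _ in range(reps):
        i = 0
        t0 = time.perf_counter()
        for _ in range(n):
            i = nxt[i]
        best = min(best, (time.perf_counter() - t0) / n)
    return best

# Working sets spanning typical L1 / L2 / L3 / RAM territory:
for kb in (16, 256, 4096, 16384):
    print(f"{kb:>6} KB working set: {traversal_time(kb * 1024) * 1e9:5.1f} ns/load")
```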

Note that finding representative Geekbench single/multi-core scores in the Geekbench Browser is a nightmare: the Geekbench squad doesn't vet anything, there are lots of extreme overclocking shenanigans, the ID system is lame, and the search is awful. The scores reported here look very appropriate for this generation of kit.

Based on legit 12th-gen scores, Alder Lake hacks are a mixed bag:

They show a very strong 45% single-core gain over the last Intel supported by Apple, nearly 2x the Xeon in the top Mac Pro! That's a great step ahead. But iMac 20,x single-core beats the zillion-dollar Mac Pro, so the top-end market must care about things well beyond single-core benchmarks.

The multicore is also strong for a Mac, but not for an overclocked hack. These Alder Lake scores are less than a 10% gain over the Comet Lake 10900. (The Rocket Lake 11900 held the line at the sacrifice of 2 cores.)

Yes, multicore is inhibited because not all the cores are on board. But even if all cores were on board, the 12900 multicore would only be on par with the Skylake-X 9980XE! Basically, Intel has downgraded their 18-core i9 Extreme Edition and added fiddly bits for power to get it into the mobile space, while churning ahead across the board with IPC advances in single-core to keep up with AMD/ARM. (IPC means instructions per clock, the internal parallelism of instruction execution within each core. The hyperthreading feature is one example of this sort of optimization.)

So let's stand back from this particular project to consider that Apple's bragging rights have always depended on total SW/HW-stack optimization. Today they have a first-class in-house SoC (system on a chip), so their tuning sits on top of the brute force of any alternative architecture, and they can also optimize the SW architecture to their advantage, no holds barred. Such a holistic approach always excels.

The fly in the ointment for the hack is that there's no reason to think Apple's bragging rights can ever be matched for Apple workloads, e.g. the FCPX suite, on Intel 11th gen and beyond, because nobody is tuning Apple's SW stack for Intel architectures anymore. Plus there's a lot to think about for good system performance besides raw speed, like whether your system is throwing all your data away or being eaten by malware.

At the same time, if you hope that Adobe / DaVinci workloads will still do well because blah, blah... er, those aren't being further tuned on Intel Macs either. So why not use them on Windows, where the ecosystem thrives and the resources needed to improve them are available? (Maybe because on Windows your system keeps getting eaten by malware?) Nobody here thinks they're going to keep hacking to close the bug and performance gaps in apps!

So, while Apple M1 looks like a bold advance, Alder Lake looks like yet another basic evolutionary step in IA that isn't yet mature in the PC market. Maybe in 2022, on Windows, it will come together.

But let's review the prospects: DDR5 solves problems for servers. PCIe 5.0 is gated by tricky signaling problems that make it expensive, and currently has no support from GPU or storage makers; Samsung will have an enterprise SSD offering by next summer. For Alder Lake gaming, PCIe 5.0 seems to promise something about freeing up lanes for additional GPUs, not increasing the pipe to the GPU. I'm not able to get a reading on this yet, but so far it could be one hand giving as the other takes away. Regardless, PCIe 5.0 is still the uncertain future. Z690 board vendors are damning it with the moniker "PCIe 5.0 Ready!", which means the industry doesn't yet know what it's doing.

Overall, with 12th gen, Intel seems confused and motivated to squash their top-end gaming parts into laptops to have a PC story against Apple. They would have learned about Apple's SoC directions long ago but maybe dismissed them.

If Alder Lake offers only incremental top-end performance, with I/O potential that won't hit its stride until 13th gen, then this 12th-gen stuff will never reach maturity for the Mac hack user.

All of this keeps pointing, like a huge DO NOT ENTER sign, to a dead end for hackintosh along every dimension... unless Apple were to produce more Intel designs? Maybe they might, as a hedge?

Nahhh!

Forgive me. Can't avoid getting snarky.

Changing things up, I want to laud the efforts on this forum to get 12 gen running. How can I do this without seeming like a dork...? We stand on your shoulders!

I certainly want to know what's possible and to explore. Highly worthwhile stuff. Even if it means facing dread implications.

Great work here to help understand the future.

(We are living in a magical age. Everyone wonders about UFOs, but this technology is like an alien incursion. It's so far beyond what an ordinary person could achieve that it may as well come from another solar system. So props to all the ET welcoming committee members and delegates)
I said this when someone mentioned the benchmark difference of around 1K between Windows and macOS for the i9-12900K.
Please NOTE that memory will affect the benchmark by a good margin.
See below two benchmarks with different memory sizes for an i7-12700K CPU:
Case 1 with 16GB memory

[screenshot: Geekbench, 16GB RAM, RX 560]


Case 2 with 32GB memory
[screenshot: Geekbench, 32GB RAM, RX 560]

 
So if we disable E-cores for Hackintosh, is that really such a bad thing?
Why are we considering disabling E-cores again?
Doesn't SSDT-PLUG-ALT allow us to use all cores and get Windows like performance?
 
Why are we considering disabling E-cores again?
Doesn't SSDT-PLUG-ALT allow us to use all cores and get Windows like performance?
The SSDT disables E-cores in macOS only. There is no option at this time to use P + E + HT in macOS.
 
I'm planning to build a new hackintosh with the following main components:
- Intel i7-12700K
- Gigabyte Z690 UD AX DDR4
- DDR4 16GB x 2
- RX580

I've been following this thread for a while and noticed many people chose the Gigabyte Z690 Aero D DDR4 board. However, I don't need Thunderbolt 4, and I would rather have 2 extra PCIe x1 slots. Since nobody has tried the UD AX DDR4 on a hackintosh yet, what are my chances of getting it to work?
 