The benchmark doesn't depend only on CPU performance; it also depends on memory, the graphics card, the SSD, etc.
Geekbench single/multi-core is a CPU benchmark. It's not affected by GPU or I/O, i.e. neither the graphics card nor the hard drive factors in.
Geekbench completely separates GPU performance into its own benchmark, which they somewhat confusingly call "Compute". The GPU is not involved at all in single/multi-core.
It's affected by RAM, but not very much, because everything compute-wise is designed to work from tiers of caches, of which there are typically 2 or 3 for the CPU. There's typically a very small per-core 1st-level cache (split between instructions and data), a much larger per-core 2nd level, and a very much larger 3rd level shared by all cores. The key to caches is that they hide latency, the setup time each memory access would otherwise pay as RAM usage patterns shift. Overall, RAM speed is more about ensuring that RAM has enough throughput to handle compute across all cores and I/O simultaneously. The whole system might be thought of like traffic-signal design in a city, where you want to make good use of the roads and avoid gridlock. Like CPU clock-speed limits, most parts of today's computers are gated by fundamentals of physics, so system tuning counts for a lot as making things faster gets trickier with each stepwise advance.
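To make the latency-hiding point concrete, here's a minimal C sketch (my own illustration, nothing to do with Geekbench internals): it walks the same buffer twice, once sequentially and once with a large stride. The buffer size and strides are arbitrary choices; the point is that the strided walk defeats the caches and pays RAM latency on nearly every access, so the per-access time jumps even though the arithmetic is identical.

```c
/* Hypothetical illustration of cache-tier effects: sequential vs strided
 * access over the same buffer. Sizes and strides are arbitrary choices. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (64 * 1024 * 1024)   /* 64 MiB: far larger than a typical L3 */

static double walk(volatile char *buf, size_t stride) {
    struct timespec t0, t1;
    long touches = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < BUF_BYTES; i += stride) {
        buf[i]++;            /* one read-modify-write per location touched */
        touches++;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / touches;     /* average nanoseconds per access */
}

int main(void) {
    char *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    memset(buf, 1, BUF_BYTES);  /* touch every page up front so the timings
                                   measure cache/RAM behavior, not page faults */
    /* stride 1: sequential, prefetch-friendly, mostly cache hits */
    printf("stride    1: %.2f ns/access\n", walk(buf, 1));
    /* stride 4096: defeats spatial locality, most accesses miss out to RAM */
    printf("stride 4096: %.2f ns/access\n", walk(buf, 4096));
    free(buf);
    return 0;
}
```

On a typical desktop the strided walk comes out several times slower per access, which is the cache hierarchy doing (or failing to do) its job.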
Note that finding representative Geekbench single/multi-core scores in the Geekbench Browser is a nightmare, because the Geekbench squad doesn't vet anything: there's lots of extreme overclocking shenanigans, a lame ID system and awful search. The scores reported here look very appropriate to this generation of kit.
Based on legit 12th-gen scores, Alder Lake hacks are a mixed bag:
They show a very strong 45% single-core gain over the last Intel supported by Apple, nearly 2x the Xeon in the top Mac Pro! That's a great step ahead. But the iMac 20,x single-core already beats the zillion-dollar Mac Pro, so the top-end market must care about things way beyond single-core benchmarks.
Multi-core is also a strong showing against the Macs, but not against the overclocked hacks: these Alder Lake scores are less than a 10% gain over the Comet Lake 10900 (Rocket Lake's 11900 held that line while sacrificing 2 cores).
Yes, multi-core is inhibited because not all the cores come on board. But even if all cores were on board, the 12900's multi-core would only be on par with the 18-core i9-9980XE! Basically Intel has downsized their 18-core i9 Extreme Edition, added fiddly power-management bits to get it into the mobile space, and kept churning ahead across the board with single-core IPC advances to keep up with AMD/ARM. (IPC means instructions per clock, i.e. the internal parallelism of instruction execution within each core; Hyper-Threading, which lets two threads share a core's execution resources, is one example of this sort of optimization.)
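As a rough illustration of what those IPC advances exploit (my own sketch, not tied to any Geekbench subtest): the two loops below do the same number of floating-point additions, but the single-accumulator version is one long dependency chain, while splitting the work across four independent accumulators lets an out-of-order core overlap the adds. Built with plain -O2 (no -ffast-math, so the compiler can't reassociate the sums itself), the four-chain loop usually runs noticeably faster.

```c
/* Hypothetical demo of instruction-level parallelism: same work, different
 * dependency structure. Array size is an arbitrary choice. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static double now(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec * 1e-9;
}

int main(void) {
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = 1.0;

    /* one accumulator: every add waits on the previous add's result */
    double t0 = now(), s = 0.0;
    for (size_t i = 0; i < N; i++) s += a[i];
    double t1 = now();

    /* four accumulators: independent adds the core can execute in parallel */
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    double t2 = now();
    for (size_t i = 0; i < N; i += 4) {
        s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
    }
    double t3 = now();

    printf("1 chain : sum=%.0f in %.4f s\n", s, t1 - t0);
    printf("4 chains: sum=%.0f in %.4f s\n", s0 + s1 + s2 + s3, t3 - t2);
    free(a);
    return 0;
}
```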
So let's step back from this particular project and consider that Apple's bragging rights have always depended on total SW/HW stack optimization. Today they have a first-class in-house SoC (system on a chip), so their tuning sits on top of brute force competitive with any alternative architecture, plus they can optimize the SW architecture to their advantage, no holds barred. Such a holistic approach always excels.
The fly in the ointment for the hack is that there's no reason to think Apple's bragging rights can ever be matched for Apple workloads, e.g. the FCPX suite, on Intel 11th gen and beyond, because nobody is tuning the Apple SW stack for Intel architectures anymore. Plus there's lots to think about for good system performance besides raw speed, like whether your system is throwing all your data away or being eaten by malware.
At the same time, if you hope that Adobe / DaVinci workloads will still do well because blah, blah... er, those aren't being further tuned on Intel Macs either. So why not use them on Windows, where the ecosystem thrives and the resources needed to improve things are available? (Maybe because on Windows your system keeps getting eaten by malware?) Nobody here thinks they're going to keep hacking to close the bug and performance gaps in the apps!
So, while Apple M1 looks like a bold advance, Alder Lake looks like yet another basic evolutionary step in IA that's not yet mature in the PC market. Maybe in 2022, on Windows, it will come together. But let's review the prospects: DDR5 solves problems for servers. PCIe 5 is gated by tricky signaling problems which make it expensive, and it currently has no support from GPU / storage makers; Samsung will have an enterprise SSD offering by next summer. For Alder Lake gaming, PCIe 5 seems to promise something about freeing up lanes for additional GPUs, not widening the pipe to the GPU. I can't get a clear reading on this yet, but so far it could be a case of one hand giving while the other takes away. Regardless, PCIe 5 is still an uncertain future: Z690 board vendors are damning it with the moniker "PCIe 5 Ready!", which means the industry doesn't yet know what it's doing.
Overall, with 12th gen Intel seems confused, motivated mainly to squash their top-end gaming parts into laptops so they have a PC story against Apple. They would have learned about Apple's SoC direction long ago but maybe dismissed it.
If Alder Lake offers only incremental top-end performance, with I/O potential that won't hit its stride until 13th gen, then this 12th-gen stuff will never reach maturity for the Mac hack user.
All of this keeps pointing, like a huge DO NOT ENTER sign, to a dead end for hackintosh along every dimension... unless Apple were to produce more Intel designs? Maybe they might, as a hedge?
Nahhh!
Forgive me. Can't avoid getting snarky.
Changing things up, I want to laud the efforts on this forum to get 12th gen running. How can I do this without seeming like a dork...? We stand on your shoulders!
I certainly want to know what's possible and to explore. Highly worthwhile stuff. Even if it means facing dread implications.
Great work here to help understand the future.
(We are living in a magical age. Everyone wonders about UFOs, but this technology is like an alien incursion: it's so far beyond what an ordinary person could achieve that it may as well come from another solar system. So props to all the ET welcoming-committee members and delegates.)